paper_id
stringlengths
10
10
title
stringlengths
17
149
abstract
stringlengths
468
2.59k
pdf_url
stringlengths
71
71
reviews
listlengths
2
7
markdown
stringclasses
11 values
zzOOqD6R1b
Stress-Testing Capability Elicitation With Password-Locked Models
To determine the safety of large language models (LLMs), AI developers must be able to assess their dangerous capabilities. But simple prompting strategies often fail to elicit an LLM’s full capabilities. One way to elicit capabilities more robustly is to fine-tune the LLM to complete the task. In this paper, we investigate the conditions under which fine-tuning-based elicitation suffices to elicit capabilities. To do this, we introduce password-locked models, LLMs fine-tuned such that some of their capabilities are deliberately hidden. Specifically, these LLMs are trained to exhibit these capabilities only when a password is present in the prompt, and to imitate a much weaker LLM otherwise. Password-locked models enable a novel method of evaluating capabilities elicitation methods, by testing whether these password-locked capabilities can be elicited without using the password. We find that a few high-quality demonstrations are often sufficient to fully elicit password-locked capabilities. More surprisingly, fine-tuning can elicit other capabilities that have been locked using the same password, or even different passwords. Furthermore, when only evaluations, and not demonstrations, are available, approaches like reinforcement learning are still often able to elicit capabilities. Overall, our findings suggest that fine-tuning is an effective method of eliciting hidden capabilities of current models but may be unreliable when high-quality demonstrations are not available, e.g., as may be the case when models’ (hidden) capabilities exceed those of human demonstrators.
https://openreview.net/pdf/060fc5a68cf9e8cd99067fa71d86b9b2407c68af.pdf
[ { "confidence": 4, "rating": 8, "review_id": "Hgk9jK64zF", "review_text": "The paper studies whether fine-tuning can elicit the hidden capabilities of LLMs, especially motivated by the setting of dangerous capabilities evaluations. \n\nTo provide a specific experimental setup, the paper considers password-locked models, which responds with weak answers unless provided with a fixed password in the prompts. By giving strong demonstrations generated from a strong base model to the locked model, they show that SFT typically can recover most of the capabilities hidden in the locked model, achieving unlocking. \n\nThe paper continues to study RL (with reward but not explicit demonstrations) and a toy setting where the password locking is achieved from scratch.\n\nIn my view, the paper studies a well motivated concern in current safety research agenda on dangerous capability eval. The password-locking scenario is a novel instantiation that we can concretely study capability elicitation. I find that simply making this formal and executable is already a meaningful contribution.\n\nThe paper is nicely written and well structured. \n\nThe two main lines of the experiments on SFT and RL are solid, and the results are novel.\n\nFor section 5, a minor concern I have is that the password-locking is achieved by SFT as well. This introduces this weird \"causal structure\" in the experimental setup: specifically, the capability elicitation (in section 5) and the password-locking are both via SFT. I can imagine that we get the results in section 5, not (only) because SFT is good in capability elicitation, but because it's relatively weak in robustly password-locking the model. This might happen, for example, if password-locking a kind of an unnatural task, and a base LM trained on natural language corpus won't robustly lock itself via SFT. To be fair, I think this is not the most likely explanation of the results in section 5. But I am wondering if the author(s) has any take on the relative power of SFT. (Also, I think section 7 partially addresses this concern anyway, though I look forward to seeing more evidence against this weird hypothesis.)\n\nThe paper would generally be stronger if there is a straighforward baseline on capability elicitation via few-shot/many-shot prompting only (no messing with the model weights). Similarly, I'd love to see how RepE / steering vector perform on password-locked models; i.e., can you find a direction in the activation space to unlock the model? (I would guess you can.)\n\nHow robustly has SFT password-locked the models? Any baseline attempts to jailbreak it, via say few-shot prompting?" }, { "confidence": 3, "rating": 7, "review_id": "1iJNEK19Nx", "review_text": "To determine whether models are safe to deploy, it's important for labs and regulators to be able to \"elicit\" model capabilities, including ones that are somehow \"hidden\" (whether by a malicious deployer, by some unforeseen property of the training data, or by an unaligned model that is able to plan). One way to evaluate for the presence of some capability is to fine-tune the model to perform the capability and check whether the model is able to do it. Does this work? The authors study this question by introducing and training \"password-locked\" models that are trained to only perform particular tasks well if a password is present in the prompt. The authors find that given sufficiently good demonstrations, fine-tuning and RL succeed at eliciting capabilities, but very poor demonstrations are insufficient.\n\n- The strongest contribution of this paper is the framing of the problem---motivating the problem of capability elicitation and showing some preliminary experiments in the space. The problem is pretty similar to backdoor detection but the new framing does provide different constraints which I find interesting: (1) the test model (in this case a password-locked model) doesn't necessarily have to be poisoned in a sneaky way like in backdoor attacks, instead the goal is to produce a test model in any way that somehow hides capabilities in a way that realistically imitates \"scheming\" or \"sandbagging\"; (2) the attacker is not necessarily given access to the training data or high-quality data.\n- Though I think it's unsurprising that fine-tuning successfully unlocks these password-locked models, I think the finding that fine-tuning on lower quality data doesn't work as well is an interesting contribution (a priori I could imagine that fine-tuning on lower quality data would still work to fully unlock the model if the password-locking were implemented internally as a simple conditional). The result in Fig 10 where domain generalization still works when each domain has a different password is also slightly surprising.\n- Overall I like the experimental methodology (testing both supervised fine-tuning and RL, modulating the quality of the fine-tuning data, testing generalization across domains---all choices motivated by their problem framing).\n\nThe technical contributions here are somewhat limited. I'd say password-locking is a fairly simple way of modeling hidden capabilities and could be quite unlike the threat models that the authors are actually interested in (situationally-aware LLMs or AI labs deliberately hiding capabilities from auditors; in the case of situationally-aware models, presumably the model does some kind of complex internal reasoning to determine their next action, whereas these password-locked models seem likely to just learn some kind of simple conditional to output X or Y). Indeed, the authors acknowledge this briefly in the limitations section. To some extent I think this is a result of the these threats not being understood well enough for us to know how to model them in a more realistic way than these password-locked models. Given the simplicity of password-locking I'd say that the success of the authors' elicitation methods are mostly unsurprising and also may not generalize to their real threat models.\n\n- Are there any techniques from past work that may be relevant here? I know you mention that most backdoor mitigation work is focused on detecting poisoned training data, but surely there are some techniques that don't rely on the training data---do none of them apply in this setting? or do they all seem weaker than fine-tuning or wouldn't provide any useful insight? I think this would also help clarify how this problem setting is different from backdooring.\n- It seems the most important future work here would be to come up with more realistic models than these password-locked models. Do you have ideas how how to do this? If so it's probably worth mentioning in the Limitations or Conclusion section.\n\n\nOther remarks that don't require a response:\n- I agree that a weakness of attacking the model via prompting is that it may underestimate capabilities, and in that sense doing fine-tuning seems like the correct choice. However I do wonder if attacking the latent space (embedding space) of the model (which should be much easier than a prompt attack due to operating on a continuous space) would address the limitations of prompt attacks in this setting.\n- Fig 18 caption: I think the last sentence has a typo (a repeated phrase)" }, { "confidence": 3, "rating": 5, "review_id": "VHhxlYfeDy", "review_text": "The authors study the problem of testing advanced LLMs for non-obvious capabilities in a toy setting, where a model has been fine-tuned to only perform well when presented with a fixed password, and to perform poorly otherwise. This is meant to simulate the difficulty of finding the correct prompt for eliciting a particular capability among many other suboptimal prompts. The authors then study various methods of fine-tuning to trigger the strong performance even without the right password, such as supervised fine-tuning on high/low-quality demonstrations as well as various forms of RL which do not require demonstrations at all. They find that performance on par with original with-password performance can be easily achieved, suggesting that prompt fragility and variability may not be difficult problems.\n\n* The problem is interesting, and the authors present a creative reduction to the toy problem of password-locked models\n* A wide array of elicitation methods is considered, from many different settings of SFT as well as various demonstration-free methods using reinforcement learning to explore the more difficult where we do not have access to high quality demonstrations. Editorially, I do not think this is a very important setting, since it seems unlikely that we would ever be testing models for capabilities for which we have no idea what successful behaviors look like. And the dependence on RL approaches, which are in general much flakier than supervised learning, casts a lot of doubt on whether results in this setting would generalize in the real world. But the authors think this is important and study it carefully.\n* Lots of analysis on a diverse set of task settings\n\n* The authors explain the definition and process of password-locking models quite thoroughly, but their paper would be better served by a clearer and earlier motivation for studying password-locked models.\n* In general, the authors lean heavily on references to prior work to explain various decisions in their experimental design, or different analyses of model performance. It would be helpful if these choices were just explained directly, and then separately credited prior work for motivating these decisions.\n* There are various alternative approaches to eliciting stronger performance without the correct password, such as prompt optimization (e.g. with GCG or even gradient-free search) or few-shot prompting, which are not explored in this paper\n* The nomenclature throughout the paper is a little confusing. The pi^weak_base notation was hard to keep track of, as well as the locked with password/locked without password/unlocked distinctions\n* Given the close connection to neural backdoors/trojans, the authors should explain more clearly why or why not backdoor removal methods are not applicable here\n\n* As I understand, the reason for studying password-locked models is that they are a rough approximation of prompt fragility in LLMs, i.e. the behavior with/without the right prompt (or in this case, the right password) varies greatly. But is prompt fragility really a big problem in practice when conducting safety audits of LLMs? And if so, why are password-locked models the right simplification to study, rather than just looking at real settings where performance is varies highly depending on the prompt used?\n* What evidence do the authors have that their password-locking was successful? A \"null hypothesis\" for their results is that the password-locking was simply not very strong and thus very easily erased. Do perturbations of the password also work? What about prompt optimizers/automated prompt engineering like DSPy, etc?\n* How do we know the fine-tuning isn't just improving overall task performance? Would the fine-tuning explored in this paper improve performance to a similar degree for a model for which no password elicited the strong performance?" }, { "confidence": 4, "rating": 5, "review_id": "KgdS6LAkEd", "review_text": "This paper studies the problem of eliciting hidden capabilities from language models, through fine-tuning. In order to plant 'locked' capabilities and then do controlled test of elicitation, the authors lock the capabilities using passwords they bake into the prompts at fine-tuning. Then, they try to elicit the capability without the password in a zero shot way, and then after fine-tuning with a few good samples, or using RL and existing judgment labels. They find that using a few good FT samples the capabilities are almost completely attainable.\n\n1. I liked the idea of using passwords to control for the model behavior, and lock the capabilities.\n\n1. I am not sure about the motivation of the paper. Is model locking something that people really do in practice? Using a set of words in a prompt is pretty weak in practice, there might be stronger ways to lock a model, such as posthoc methods at decoding time. Also the findings and insights are not that surprising. Many safety alignment and jailbreaking papers show that alignment is 'shallow' and can be easily reversed [1,2]\n\n2. Using fine-tuning and RL at decoding time is a pretty strong assumption, as having access to model parameters, training a model and also having access to high quality data is not that realistic.\n\n[1] Patil, Vaidehi et al. “Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks.” ArXiv abs/2309.17410 (2023): n. pag.\n\n[2] Yang, Xianjun, et al. \"Shadow alignment: The ease of subverting safely-aligned language models.\" arXiv preprint arXiv:2310.02949 (2023).\n\n1. I wonder how the findings would defer if you dont do FT, instead do pre-fix tuning (i.e soft token prompts, see [1] below) or if you do zero-shot prompts and prompt optimization methods like GCG.\n\n[1] Li XL, Liang P. Prefix-Tuning: Optimizing Continuous Prompts for Generation. InProceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) 2021 Aug (pp. 4582-4597)." } ]
## Stress-Testing Capability Elicitation With Password-Locked Models Ryan Greenblatt ∗ Redwood Research [email protected] Fabien Roger ∗ Redwood Research [email protected] Dmitrii Krasheninnikov University of Cambridge [email protected] ## David Krueger University of Cambridge [email protected] ## Abstract To determine the safety of large language models (LLMs), AI developers must be able to assess their dangerous capabilities. But simple prompting strategies often fail to elicit an LLM's full capabilities. One way to elicit capabilities more robustly is to fine-tune the LLM to complete the task. In this paper, we investigate the conditions under which fine-tuning-based elicitation suffices to elicit capabilities. To do this, we introduce password-locked models , LLMs fine-tuned such that some of their capabilities are deliberately hidden. Specifically, these LLMs are trained to exhibit these capabilities only when a password is present in the prompt, and to imitate a much weaker LLM otherwise. Password-locked models enable a novel method of evaluating capabilities elicitation methods, by testing whether these password-locked capabilities can be elicited without using the password. We find that a few high-quality demonstrations are often sufficient to fully elicit password-locked capabilities. More surprisingly, fine-tuning can elicit other capabilities that have been locked using the same password, or even different passwords. Furthermore, when only evaluations, and not demonstrations, are available, approaches like reinforcement learning are still often able to elicit capabilities. Overall, our findings suggest that fine-tuning is an effective method of eliciting hidden capabilities of current models, but may be unreliable when high-quality demonstrations are not available, e.g. as may be the case when models' (hidden) capabilities exceed those of human demonstrators. ## 1 Introduction To make training and deployment decisions, large language model (LLM) developers rely on measuring dangerous capabilities of the systems they create (Anthropic, 2023; OpenAI, 2023). Per the executive order on artificial intelligence (White House, 2023), such companies are required to 'develop tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards'. Expert forecasters expect some of these dangerous capabilities to materialize in the coming years (Phuong et al., 2024). Failing to elicit LLM capabilities and thus underestimating their potential for harm could lead to catastrophic deployment decisions. 1 * Equal contribution. Author contributions are listed in Appendix A. Code available at https://github.com/FabienRoger/sandbagging . 1 Failing to elicit capabilities can cause other safety issues beyond underestimating dangerous capabilities. We give a more detailed overview of these problems in Appendix D.1. Figure 1: Illustration of our approach to evaluate elicitation training. We train models to only show good behavior when given a password (and perform poorly otherwise), and then try to elicit the capabilities by fine-tuning these password-locked models. We evaluate elicitation methods that use few or no good answers. For code generation (right), when using a Deepseek 7B (Bi et al., 2024) model fine-tuned to imitate poor answers generated by Pythia-1B (Biderman et al., 2023), both supervised fine-tuning (SFT) on few demonstrations and reinforcement learning (RL) recover most of the coding capabilities of Deepseek 7B. <!-- image --> Capabilities are often elicited with prompting, but prompting does not reliably elicit the best performance from LLMs (Brown et al., 2020; Janus, 2021; Wei et al., 2022; Davidson et al., 2023), even if the capability is present. For example, Perez et al. (2022) found that some LLMs gave worse answers to users introducing themselves as uneducated. Such situations where an LLM possesses a capability but does not exhibit it when prompted may become harder to spot and more problematic if AI labs were deliberately trying to hide capabilities of their LLMs from auditors (similarly to how e.g. Volkswagen hid high emissions of their cars (Jung &amp; Park, 2017)), or if powerful, misaligned and situationally-aware (Ngo et al., 2024) LLMs were trained, and exhibited safe behavior during evaluations due to sandbagging 2 (Park et al., 2023; Hubinger, 2023) instead of due to lacking the dangerous capability. Hubinger (2023) and Casper et al. (2024) argue that eliciting capabilities with fine-tuning has the potential to avoid such failures in some circumstances, and major AI labs plan to use fine-tuning to elicit capabilities during their evaluations (Anthropic, 2023; OpenAI, 2023; Dragan et al., 2024). But fine-tuning-based elicitation also has failure modes. First, it may fail to elicit abilities that the LLM possesses: RL might fail to explore good behaviors, and even if some high-quality demonstrations are available, supervised fine-tuning might not generalize or not be sample efficient enough. Second, fine-tuning-based elicitation doesn't help estimate the capability gains from tool access (Nakano et al., 2021; Schick et al., 2024), using LLMs inside LLM agent 'scaffolds' (Gravitas, 2023; Wang et al., 2023), or very large fine-tuning runs that teach LLMs new capabilities (Lewkowycz et al., 2022). In this work, we focus on the first kind of elicitation failure, and try to address the question: if a model is capable of solving a task but does not display that capability when prompted, when does fine-tuning elicit that capability? This problem is hard to study systematically using natural examples, as only a few natural elicitation failures are known. Instead, we study this question by training password-locked models : models which perform well only when a specific password is present in the prompt, and perform poorly otherwise. Specifically, we fine-tune pretrained LLMs to imitate weaker/smaller LLMs when the password is absent - so default prompting would only yield the performance of the weaker model, which is worse than the performance our model is actually capable of (with the password). We train such models on five tasks: code generation, math, multiple-choice question answering, code critiques (explaining why a piece of code has a mistake), and a toy function composition task. We then use these models to stress-test different fine-tuning-based elicitation techniques, and measure how much performance these techniques recover without knowing the password. We illustrate our approach in Figure 1. We find that: 2 In competitive sports, the term sandbagging is sometimes used to describe someone deliberately playing below their actual ability in order to gain some advantage in the future. - · Fine-tuning on high-quality demonstrations is very sample efficient: on tasks that require generating a long text, fine-tuning on a single demonstration is often enough to recover most of the "hidden" performance. Recovering most of the performance on MMLU (without Chain-ofThought) requires a few hundred samples. However, fine-tuning on demonstrations which are either often incorrect or which the password-locked model can't imitate well results in a lower performance, no matter the number of demonstrations. (§5.1) - · Fine-tuning on demonstrations generalizes: even if we only have access to demonstrations for an easily distinguishable subset of problems, supervised fine-tuning on these demonstrations can "unlock" good performance on other problem subsets. (§5.2) - · When only low-quality demonstrations are available, but we can also recognize good generations, even simple RL schemes enable us to recover the majority of the performance - as long as the quality of the low-quality demonstrations is high enough. (§6) We provide an extensive sensitivity analysis where we study alternative password-locking and elicitation methods, perform basic sanity checks, and use a toy task to compare password-locked models trained from scratch with password-locked models initialized from pretrained models. ## 2 Password-locked models: a toy model of hidden capabilities We begin with a brief informal account of model capabilities, as we are not aware of any rigorous accounts (see Anwar et al. (2024, §2.2) for related discussion). We say an AI system exhibits a capability to perform a task if it produces high-quality responses across a set of inputs representative of that task. Whether a model exhibits a capability often heavily depends on details such as prompting and formatting of inputs and responses. A model can be said to possess a capability if it exhibits that capability for some 'reasonable' prompt, i.e. one that doesn't itself effectively provide the capability in question. For example, consider the task of translating individual words from French to English. A prompt asking an LLM to imitate a professional translator could be considered reasonable; a prompt which includes a French -&gt; English dictionary would not. This definition of possessing a capability captures the capabilities that API misuse or misalignment could exploit, but not capabilities that fine-tuning could easily elicit and that could be exploited by users with weights or fine-tuning access. Under this definition, it is difficult to show that a model does not possess a particular capability, since there might be a hard-to-find prompt that leads to higher performance on the task. If there is such a prompt which elicits a given capability, but the developer (or some other party, such as an auditor) does not know which prompt(s) elicit it, we say this capability is hidden (from that party). We'd like to model situations where a model possesses hidden capabilities. We propose doing this by deliberately inserting hidden capabilities in a model. Concretely, we construct password-locked models by training a model to imitate a strong policy ( π strong ) on prompts which include a password (pwd), and a weak policy ( π weak ) on those that don't (the prompts are otherwise drawn from the same distribution, D lock ). Specifically, we password-lock a model by conducting supervised learning on the following D lock dataset of prompts and expected generations: <!-- formula-not-decoded --> Asuccessfully trained password-locked model possesses the capabilities of π strong , but will not exhibit these capabilities unless the password is present in the prompt. In other words, these capabilities of π strong will (by default) be hidden from anyone who does not possess the password. ## 3 Related work Backdoors. Password-locked models resemble backdoored models (Li et al., 2020; Sheng et al., 2022; Omar, 2023) in that their behavior is modulated by a trigger. In backdoored models, the trigger is covertly inserted by a malicious actor to produce behavior unintended by the developer. Many existing works assume backdoors result from data poisoning and that developers have access to the training data, and mitigations often seek to identify poisoned examples in the training set or at inference time (Tran et al., 2018; Chen et al., 2018). Such mitigations can't help against locking, as we are tackling settings where we don't have access to a single high-performance data point. Our work is similar to Hubinger et al. (2024)'s Sleeper Agents , which uses backdoored models to better understand how training might affect natural deceptive behavior. Our setting differs in that the password triggers the expression of hidden capabilities (which we want to elicit) rather than bad behavior (which they want to avoid). See Appendix D.3 for a detailed comparison. Capabilities elicitation via prompting. Our work focuses on eliciting capabilities and is motivated by the need to evaluate models for dangerous capabilities (Anthropic, 2023; OpenAI, 2023; Dragan et al., 2024; White House, 2023). Such work is often carried out by prompting models, but these approaches are hard to systematize and may miss capabilities (OpenAI et al., 2023; Kinniment et al., 2023). Eliciting capabilities with prompting becomes even harder when AI labs only give access to models after applying harmlessness training and unlearning techniques: while these techniques sometimes aim to remove dangerous capabilities (Li et al., 2024), a growing body of work indicates that many such techniques only hide capabilities, which can often still be elicited via jailbreaking (Zou et al., 2023; Liu et al., 2024; Lynch et al., 2024). Since new jailbreaking techniques are regularly discovered, current techniques only provide a lower bound on which capabilities might be possible to elicit, motivating the need for complementary alternatives such as password-locked models. Capabilities elicitation via fine-tuning. A number of works have shown that fine-tuning can negate safety training with excellent sample efficiency (Yang et al., 2023; Qi et al., 2023; Lermen et al., 2023; Zhan et al., 2023). Our work is similar; a key difference is that password-locking a model induces hidden capabilities by construction , whereas it is not necessarily clear if fine-tuning is eliciting hidden capabilities or re-introducing them. 'Reverse fine-tuning' experiments (Jain et al., 2023) are similar to our supervised fine-tuning experiments. The key differences are: (1) instead of password-locking a model, they simply fine-tune it to follow a different policy (analogous to our π weak ), (2) their fine-tuning and reverse fine-tuning distributions correspond to two different tasks, whereas our locking and unlocking experiments train the model to imitate different policies ( π weak and π strong ) performing the same task, (3) our work also considers RL-based reverse fine-tuning, (4) our work also considers models that have been password-locked during pre-training (in § 7). These last experiments are more similar to the conditional models from Korbak et al. (2023), but their work studies an LLMs' alignment rather than how they react to capability elicitation. ## 4 Experiment setup Our experiments proceed as follows: We take a model π strong base , which may or may not be pretrained, and password lock it as described in § 2 to create π locked . After locking a model, we then try to unlock it -i.e. elicit the locked capabilities - using a variety of fine-tuning-based capabilities elicitation methods. Importantly, we do not assume access to the password during unlocking. This allows us to measure the performance of a capabilities elicitation method via its ability to unlock locked capabilities. <!-- image --> base → is fine-tuned into glyph[axisshort]glyph[axisshort]glyph[arrowaxisright] demonstrates to ## 4.1 Explanation of the setup Figure 2: Illustration of the training, locking, and unlocking process for password-locked models (for clarity: π 1 → π 2 reads as ' π 2 is fine-tuned from π 1 '). Our choices of π strong base , π weak , and π dem , and how we fine-tune π strong base are task-dependent, are presented in Table 1. Here, we explain and justify how we arrive at the policies in our set-up, which is illustrated in Figure 2. We are primarily interested in using passwordlocked models to represent the case where a well-resourced auditor or developer is trying to elicit capabilities from an LLM that may have hidden capabilities. The requirements we seek to satisfy in modelling this scenario are: - 1. Hidden capabilities exceed visible capabilities. This requires that π strong is stronger than π weak , so we use a smaller base model π weak base than π strong base . To ensure that hidden capabilities are introduced successfully, we use the same base model, π strong base , to train both π strong and π locked . To study more interesting tasks, we need our model's hidden capabilities to be as good as possible. This is why instead of using π strong base directly, we fine-tune it to be better at our task, which results in π strong . - 2. We can compel the model to attempt the task. We obtain π weak by fine-tuning π weak base to imitate π strong - this is to capture the fact that it's usually possible to provide at least poor demonstrations of the desired behavior (e.g. by having a human attempt to generate them). Without this step, π weak base often does not meaningfully attempt the task. - 3. For SFT, high-quality demonstrations π dem are available. This requires that π dem generally produces correct answers. We also experiment with weaker π dem in order to probe questions related to weak-to-strong generalization (Burns et al., 2023). - 4. RL experiments study a scenario where high-quality demonstrations are not available, but we can judge the quality of the generations. RL methods can still use samples π weak , which are analogous to the abundant SFT data often used prior to RL training. Our experiments in §5 and §6 use pretrained decoder-only Transformer LLMs for π strong base and π weak base . We document the specific models in Table 1. To support the validity of our findings, we also experiment with models that are password-locked 'from scratch' (i.e. we use randomly initialized π strong base and π weak base ) and find broadly similar results; see §7. ## 4.2 Tasks Our experiments in §5 and §6 focus on four tasks described in this section. Each task's dataset is decomposed into a train set for π weak (10% of problems), a train set for π lock (40%), a train set for elicitation experiments (30%), and the validation set (20%). See Appendix E for other details about dataset construction. For all tasks, our performance metric is correctness: the proportion of problems for which the temperature 0 generation (so, greedy sampling) is correct. All plots report the standard error of the mean over 3 seeds. Code generation. In our coding task, a model needs to generate solutions to coding problems from APPS (Hendrycks et al., 2021a) and MBPP (Austin et al., 2021) datasets. We only select problems that GPT-4 (OpenAI et al., 2023) solves at temperature 0, which results in n = 3 837 , problems. Math. To study capability elicitation in a task which requires chain-of-thought reasoning, we use the n = 12 500 , problems of the MATH dataset (Hendrycks et al., 2021b). Code critiques. Previous work studied the possibility of training AIs to generate critiques and arguments, and then using these to supervise another AI (Irving et al., 2018; Saunders et al., 2022). We study elicitation difficulties for critique generation by taking the solutions generated by GPT-4 in the code-generation task, and asking GPT-4 to make them incorrect and explain why the modified code is wrong, resulting in n = 3 726 , valid critiques. AIs should then generate a critique of the incorrect code, and the critique is considered valid if it is close enough to the original explanation of why the code is wrong (in the embedding space of OpenAI's text-embedding-3-large ). MMLU. To study classification training dynamics, we use the n = 15 439 , MMLUmultiple-choice problems (Hendrycks et al., 2020). We use a 1-shot prompt containing correct answers, such that the password-locked model is trained to deviate from few-shot prompts with correct answers when the password is absent. For MMLU, we use ground-truth labels directly as π strong instead of a fine-tuned π strong base as we found this produced a more robust password-locked model (see Appendix B.2). | Task | π strong base | π weak base | π strong | High-quality demonstrations | |-------------------------|-----------------------------------------------------------------|-----------------------------------------|-------------------------------------------------------------------------------------------------------|-------------------------------------------------------------| | Code MATH Critique MMLU | Deepseek-7B-Coder Deepseek-7B-Math Deepseek-7B-Coder Mistral-7B | Pythia-1B Pythia-1B Pythia-1B Pythia-7B | π strong base , SFT on GPT-4 T=1 π strong base π strong base , SFT + Iterated DPO Ground-truth labels | GPT-4 T=0 GPT-4 T=0 Reference critiques Ground-truth labels | Table 1: Summary of the models and policies used for each task. We study the sensitivity of the results to these choices in Appendix B.1. We rely on pre-trained models from the Deepseek (Bi et al., 2024; Shao et al., 2024) and Pythia (Biderman et al., 2023) families, as well as Mistral-7B (Jiang et al., 2023) and GPT-4 (OpenAI et al., 2023). Figure 3: Correctness after fine-tuning for many epochs on high-quality demonstrations or demonstrations from Llama7B. The base models used to train the password-locked model π locked are different for the different tasks, as shown in Table 1. Sample efficiency is very high, except for MMLU. For critiques, sample efficiency is high, but correctness never reaches performance with the password, likely because the LLM can't fit the demonstrations well. Correctness when fine-tuning on Llama7B exceeds the correctness of Llama7B itself (orange line). <!-- image --> ## 5 Elicitation with demonstrations We first study sample efficiency of capabilities elicitation using supervised fine-tuning. Similar to Mosbach et al. (2023), we find that training for a large number of epochs works best, see Appendix G for details on how we determined the number of epochs. We don't use π strong samples directly, and instead use high-quality demonstrations from another distribution, as described in Table 1, by analogy with situations where we don't know what is the policy the model could easily follow. Results with π strong samples are shown in Appendix 7. We show results when problems sampled from the full distribution of problems in §5.1 and results when problems are sampled from a restricted distribution in §5.2 ## 5.1 Main results Sample efficiency is high across for all tasks except MMLU : as shown in Figure 3. fine-tuning on 512 samples isn't much better than fine-tuning on 4. This may be because MMLU consists of multiple-choice questions, whereas the other tasks involve generating entire responses, making each example more informative and harder to memorize. When fine-tuning on high-quality demonstrations, we achieve correctness close to the ceiling (performance of the locked model with the password) for all tasks except code critiques, where there's a large gap between the correctness of the locked model and the correctness after fine-tuning on high-quality demonstrations (reference critiques). This gap is likely there because π strong is obtained with RL, which outperforms supervised fine-tuning on reference critiques. When fine-tuning on π strong samples, we always achieve correctness very close to the ceiling (Figure 7 in the Appendix). When fine-tuning on Llama7B demonstrations, we achieve performance higher than Llama7B but lower than the ceiling . Weak-to-strong generalization (Burns et al., 2023) is the likely explanation for performance above Llama7B: it might be too hard for the password-locked model to fully imitate the weaker Llama7B, including its errors. Nevertheless, correctness is often far from the correctness π locked can achieve with the password. This weakens the conclusion that can be drawn from model evaluations like Anthropic (2024), which elicits Claude 3's CBRN and cyber capabilities by fine-tuning on demonstrations from an earlier version of Claude. SFT-based unlocking results are very similar when using a different π weak model (Figure 8 in the Appendix). Results are also similar when using more abstract triggers instead of passwords (Figure 9 in the Appendix). Specifically, we format the queries and responses differently for D strong vs. D weak ; this is analogous to, e.g., warping-based backdoor attacks (Nguyen &amp; Tran, 2021). As a sanity check, we also run an SFT experiment with π dem = π weak . Surprisingly, this slightly increases correctness for 2/4 tasks. See B.2 for detailed results and a discussion of their implications. When using few-shot prompting, we find that correctness doesn't increase, both when we train the password-locked model to be resistant to a few-shot prompt (main MMLU results) but also we don't, as we show in the Math setting in Appendix B.3.1. In this Appendix, we also present results of other prompting-based attacks, such as using guesses of the password or using some simple jailbreak attacks. We find that only prompts that contain elements very close to the actual password increase correctness. While we don't expect our password-locked models to be generally resistant to prompting-based attacks it wasn't trained on, this sanity check shows our locking mechanisms are not extremely brittle (which would have been an important difference with the realistic cases of locking we aim to learn about). ## 5.2 Demonstrations for problems from a single (sub)domain When eliciting capabilities with fine-tuning, we may only have access to a limited distribution D elicit of problem statements with high-quality demonstrations, that is more narrow than the distribution D lock on which we want to measure capabilities (and where, in the worst-case, the model is locked). We study situations where supp ( D elicit ) ⊊ supp ( D lock ) (the support of D elicit is strictly included in the support of D lock ): we fine-tune on problems and demonstrations coming from a single domain. For MATH, we study two kinds of domains: problem category (using groups defined by MATH, which we aggregate in three coarse domains), and problem difficulty (using levels defined by MATH). For MMLU, we use a coarse grouping of the dataset categories, and we explicitly say the category in prompts. See Appendix E.3 for the details of these groupings. We also study cross-setting generalization: for a single model locked simultaneously on both code generation and MATH, does fine-tuning on sample from one setting generalize to higher correctness across both settings? Our results are shown in Figure 4. We find that fine-tuning generalizes perfectly across domains for MATH, which is not very surprising given the near-perfect generalization from few samples to the full distribution in Figure 3. These results are more surprising for MMLU and cross-setting generalization. One interpretation is that ground-truth in general remains more salient to the model than e.g. ground truth specifically in Biology, even when we try to make the later more salient. Variation. Fine-tuning on samples from a new domain that wasn't locked (i.e. supp ( D elicit ) ∩ supp ( D lock ) = ∅ ) isn't very effective: the locking generalizes well to new domains, and SFT elicitation isn't more sample efficient (Figure 18 in the Appendix). Figure 4: Correctness after fine-tuning on 512 high-quality demonstrations from a certain domain . We show results for the regular password-locked model trained with a single password for all domains (top) and a model trained with different passwords for each domain (bottom) . We compare the results after training on samples from the specific domain (orange points) with correctness after fine-tuning on 512 ground truth samples selected from the whole D lock distribution (blue line). We report the average of the accuracies for each domain. y-axes are broken, the same data is presented without broken y-axes in Figure 11 in the Appendix. Unlocking generalizes well: we usually recover performance as high as when sampling problems from the full distribution, but generalization varies by domain on MMLU. Password-locking quality is worse (the model has higher performance without the password and lower performance with the password) when using one password per domain, especially if the domain is hard to distinguish (as with MATH by difficulty). <!-- image --> ## 6 Elicitation with rewards but no demonstrations What if no high-quality demonstrations are available, but we can tell which generations are correct? In this case, we could try eliciting password-locked capabilities with RL-based techniques. We start from a relatively bad locked policy (the imitation of π weak by the password-locked model), and try to improve it using correctness as a binary reward signal (1 for correct, 0 for incorrect). ## 6.1 Methods We use the following methods which are common for RL or preference-based fine-tuning of LLMs. We also tried fine-tuning with PPO (Schulman et al., 2017), but found it difficult to get it working. Expert iteration (Anthony et al., 2017; Dong et al., 2023). Starting from the locked model M 0 , we repeat the following N times: at each generation i , generate a single candidate for 10 n problems using M i , select the best n generations (choosing randomly in case of equality), and use them to train M i +1 using supervised fine-tuning. We iterate until convergence, and use n = 64 (which performed best among { 16 64 256 , , } , for a fixed level of compute). Filtering then supervised fine-tuning (Wang et al., 2022; Korbak et al., 2023). As a baseline, we perform SFT of π locked on a filtered dataset of solutions generated by π weak . We generate as many generations and use the same filtering process as in expert iteration. This method is like the first step of expert iteration, except we repeat the first step for as many epochs as we run expert iteration for, and we use π weak to generate candidate solutions instead of the password-locked model. DPO (Rafailov et al., 2024). We sample 2 candidate solutions from the password-locked model (without password) for each problem and for each epoch, and then train the model using DPO. Iterated DPO . Inspired by approaches like Yuan et al. (2024); Kim et al. (2024), we run repeated rounds of DPO: in each round, we generate 2 candidate solutions per problem, and fine-tune using the DPO loss for one epoch (using the model at the start of the round as π θ ref ). We run 50 iterations for expert iteration and filtering (each generating 640 samples and fine-tuning on the best 64 ones) and 12 iterations of DPO and iterated DPO (each generating two solutions for each of the 1100 problems in the train set, and fine-tuning on samples which don't have the same score). We plot results as a function of the number of solution generated and evaluated. <!-- image --> Weaker π Figure 5: Correctness over the course of RL-based elicitation training. Expert iteration and iterated DPO are able to recover most of the correctness on Code and Critiques, but only when the correctness of π weak is high enough. ## 6.2 Results Results are shown in Figure 5. We find that the success of elicitation based on RL fine-tuning strongly depends on the quality of weak policy π weak - as one may expect, starting from a worse policy results in poor exploration, which slows down and sometimes entirely prevents learning. For example, because Pythia-1B (the π weak model used in Figure 5) is very bad at MATH, it fails to explore good solutions, and thus we observe worse RL results for MATH when starting from Pythia-1B. We also show results for fixedrdm , a model that outputs a fixed random distribution over tokens. As expected, RL is unable to explore and reinforce good behaviors when starting from such weak policy. Variations. All RL experiments above use discrete rewards. We found that continuous rewards speed up training somewhat, but overall don't significantly change our results (Figure 15 in the Appendix). Combining RL with SFT on intermediate-quality demonstrations (as in §5.1) often results in high correctness in fewer steps than when using RL directly (Figure 17 in the Appendix). ## 7 Password-locking from scratch The setup for all tasks above involves starting with a pretrained model, and inducing hidden capabilities via supervised fine-tuning (SFT). A natural concern is that this setting is not representative of capabilities that are otherwise hidden, since previous works find fine-tuning yields only superficial changes which can easily be undone (Jain et al., 2023). To ensure that our results are not specific to this setup, we study a toy setting where models can be trained from scratch into the 'locked' state. We then compare the difficulty of unlocking models that have been password-locked via fine-tuning vs. training from scratch. Weadapt the toy setting from Ramesh et al. (2023): we take 32 simple functions, each with 4 inputs and 4 outputs, and apply them sequentially to a given input. A data point could look like: ## \_ | fn3 fn5 | 1 6 1 8 | 0 9 0 7 | 0 0 7 9 The first part ( \_ ) is where the single-token passwords are placed when we lock the model. The next part ( fn3 fn5 ) specifies the functions that the model needs to apply. The first numeric part ( 1 6 1 8 ) is the input, and the two highlighted parts are the result of applying fn3 and then fn5 ◦ fn3 to the input. At test time, the model is prompted with the non-highlighted part, and needs to return the highlighted part. Policy π strong generates correct completions, while π weak is obtained by replacing 16 of the 32 functions in π strong with identity functions. Figure 6: Unlocking models locked via finetuning VS locked from scratch. We see that models locked with SFT are easier to unlock than those locked from scratch (purple &amp; pink lines are above blue) - models locked from scratch require 2-4x the amount of data to unlock to the same extent. <!-- image --> We use this function composition setting to compare training small (18m parameters) transformer models locked from scratch (trained from scratch on data following Eq. 1) with models locked via SFT. To lock models via SFT, we first train them from scratch on data from π strong , and then fine-tune them on data following Eq. 1. Our results are shown in Figure 6. Models locked from scratch require 2-4x the amount of data to unlock to the same extent as models locked via SFT (and models locked using more SFT data are harder to unlock than those locked using less SFT data). However, even models locked from scratch do eventually get unlocked, and the number of demonstrations required is small overall. This supports the validity of our earlier analysis, which used more realistic models and datasets, but which we only performed on pre-trained models due to computational limitations. ## 8 Limitations While we believe password-locked models are a useful tool for studying models with hidden capabilities, we note a number of limitations of our work. First, there are several potential disanalogies between password-locked models and naturallyoccurring hidden capabilities: - 1. The internal computations that result in locking may be different. Password-locked models can hard-code a simple conditional policy. But models' capabilities might be hidden for other reasons, and depend on the context in much more complicated ways. - 2. Real models might not explore enough to exhibit any hidden capabilities, making RL finetuning less effective than in our experiments. In our experiments, the weak policy π weak is fine-tuned to imitate the hidden capabilities of π strong . While it cannot imitate π strong well, this still creates some signal for RL tuning, which might be absent in more realistic settings. - 3. Hidden capabilities may be less 'salient' in realistic models. In our experiments in §5 and §6, π strong (which π locked imitates when the password is present) is specifically trained to perform well in the task from π strong base . Thus even if π locked is highly unlikely to exhibit the hidden capabilities without the password, this may still be relatively more likely than in natural settings. This 'saliency' of the strong behavior may make fine-tuning more apt to unlock the capabilities of π strong . Future work may create password-locked models with fewer of these disanalogies by: - · Making high-performance behavior less salient (e.g. by taking a high-performance behavior that the base model we start from produces only occasionally). - · Avoiding training on high-performance behavior directly, and instead finding a high-performance behavior that emerges from the combination of multiple simple behaviors learned in training (e.g. like in Treutlein et al. (2024)). - · Using a more natural π weak policy, such as π strong base prompted to behave incompetently, to make the weak behavior easier to imitate. Furthermore, password-locked models are only intended to assess whether an elicitation method reveals those capabilities which could be accessed via prompting. Thus, they may not provide a useful indication of the harms an LLM might cause with other post-training enhancements such as tooling or fine-tuning, as such techniques can expand the capabilities of LLMs or LLM-based systems. Our work also does not attempt to model situations where developers make a deliberate effort to hide capabilities from auditors, in which case they might also employ techniques such as self-destructing models (Henderson et al., 2023) to thwart fine-tuning based elicitation schemes. Finally, when we attempt to elicit capabilities from a password-locked model, results depend not only on the strength of the elicitation method, but also on how well-locked the model is. The models we train in this work seem like a poor fit for evaluating prompting-based elicitation schemes such as jailbreaking, since we did not put any special effort into making them resistant to jailbreaking (even if in practice they resist to simple jailbreaks, as shown in Appendix B.3.1). ## 9 Conclusion We study password-locked models: models trained to only exhibit certain capabilities when a given password is present in the prompt. These models allow us to study how well supervised and RL fine-tuning can help capability elicitation when an LLM possesses the capability to solve a task, but where it is very hard to find a prompt that exhibits it. Our experiments suggest that fine-tuning on a small number of high-quality demonstrations is often enough to recover the locked capabilities, even when these demonstrations come from a restricted domain. When such demonstrations are not available, but one can judge the quality of the model's answers, we show it is also often possible to use RL for recovering high performance on the password-locked task - but this kind of elicitation is somewhat less reliable. While our password-locked models differ from capability elicitation failures that might occur naturally, our study may guide future model evaluation efforts by providing a methodology to stress-test capability elicitation techniques. ## References Anthony, T., Tian, Z., and Barber, D. Thinking fast and slow with deep learning and tree search. Advances in neural information processing systems , 30, 2017. 8 - Anthropic. Anthropics responsible scaling policy. https://www.anthropic.com/index/ anthropics-responsible-scaling-policy , 2023. 1, 2, 4 - Anthropic. Responsible scaling policy evaluations report - claude 3 opus. https://cdn.sanity. io/files/4zrzovbb/website/210523b8e11b09c704c5e185fd362fe9e648d457.pdf , 2024. 6 - Anwar, U., Saparov, A., Rando, J., Paleka, D., Turpin, M., Hase, P., Lubana, E. S., Jenner, E., Casper, S., Sourbut, O., et al. Foundational challenges in assuring alignment and safety of large language models. arXiv preprint arXiv:2404.09932 , 2024. 3 - Austin, J., Odena, A., Nye, M., Bosma, M., Michalewski, H., Dohan, D., Jiang, E., Cai, C., Terry, M., Le, Q., et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732 , 2021. 5 - Bi, X., Chen, D., Chen, G., Chen, S., Dai, D., Deng, C., Ding, H., Dong, K., Du, Q., Fu, Z., et al. Deepseek llm: Scaling open-source language models with longtermism. arXiv preprint arXiv:2401.02954 , 2024. 2, 5 - Biderman, S., Schoelkopf, H., Anthony, Q. G., Bradley, H., O'Brien, K., Hallahan, E., Khan, M. A., Purohit, S., Prashanth, U. S., Raff, E., et al. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning , pp. 2397-2430. PMLR, 2023. 2, 5 - Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in neural information processing systems , 33:1877-1901, 2020. 2 - Burns, C., Izmailov, P., Kirchner, J. H., Baker, B., Gao, L., Aschenbrenner, L., Chen, Y., Ecoffet, A., Joglekar, M., Leike, J., et al. Weak-to-strong generalization: Eliciting strong capabilities with weak supervision. arXiv preprint arXiv:2312.09390 , 2023. 5, 6 - Casper, S., Ezell, C., Siegmann, C., Kolt, N., Curtis, T. L., Bucknall, B., Haupt, A., Wei, K., Scheurer, J., Hobbhahn, M., et al. Black-box access is insufficient for rigorous ai audits. arXiv preprint arXiv:2401.14446 , 2024. 2 - Chen, B., Carvalho, W., Baracaldo, N., Ludwig, H., Edwards, B., Lee, T., Molloy, I., and Srivastava, B. Detecting backdoor attacks on deep neural networks by activation clustering. arXiv preprint arXiv:1811.03728 , 2018. 3 - Chen, X., Liang, C., Huang, D., Real, E., Wang, K., Liu, Y., Pham, H., Dong, X., Luong, T., Hsieh, C., et al. Symbolic discovery of optimization algorithms. arxiv. arXiv preprint arXiv:2302.06675 , 2023. 26 - Davidson, T., Denain, J.-S., Villalobos, P., and Bas, G. Ai capabilities can be significantly improved without expensive retraining. arXiv preprint arXiv:2312.07413 , 2023. 2 - Dong, H., Xiong, W., Goyal, D., Zhang, Y., Chow, W., Pan, R., Diao, S., Zhang, J., Shum, K., and Zhang, T. Raft: Reward ranked finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767 , 2023. 8 - Dragan, A., King, H., and Dafoe, A. Introducing the frontier safety framework. https://deepmind. google/discover/blog/introducing-the-frontier-safety-framework/ , 2024. 2, 4 - Gravitas, S. Autogpt, 2023. URL https://agpt.co . If you use this software, please cite it using the metadata from this file. 2 Henderson, P., Mitchell, E., Manning, C. D., Jurafsky, D., and Finn, C. Self-destructing models: Increasing the costs of harmful dual uses of foundation models, 2023. 10 - Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., and Steinhardt, J. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300 , 2020. 5 - Hendrycks, D., Basart, S., Kadavath, S., Mazeika, M., Arora, A., Guo, E., Burns, C., Puranik, S., He, H., Song, D., et al. Measuring coding challenge competence with apps. arXiv preprint arXiv:2105.09938 , 2021a. 5 - Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874 , 2021b. 5 - Hubinger, E. When can we trust model evaluations? https://www.alignmentforum.org/posts/ dBmfb76zx6wjPsBC7/when-can-we-trust-model-evaluations , 2023. 2 - Hubinger, E., Denison, C., Mu, J., Lambert, M., Tong, M., MacDiarmid, M., Lanham, T., Ziegler, D. M., Maxwell, T., Cheng, N., et al. Sleeper agents: Training deceptive llms that persist through safety training. arXiv preprint arXiv:2401.05566 , 2024. 4, 24 - Irving, G., Christiano, P., and Amodei, D. Ai safety via debate. arXiv preprint arXiv:1805.00899 , 2018. 5, 23 - Jain, S., Kirk, R., Lubana, E. S., Dick, R. P., Tanaka, H., Grefenstette, E., Rocktäschel, T., and Krueger, D. S. Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks, 2023. 4, 9 - Janus. List sorting does not play well with few-shot. 2021. 2 - Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., Casas, D. d. l., Bressand, F., Lengyel, G., Lample, G., Saulnier, L., et al. Mistral 7b. arXiv preprint arXiv:2310.06825 , 2023. 5 - Jung, J. and Park, S. Volkswagen's diesel emissions scandal. Thunderbird International Business Review , 59, 01 2017. doi: 10.1002/tie.21876. 2 - Kim, D., Kim, Y., Song, W., Kim, H., Kim, Y., Kim, S., and Park, C. sdpo: Don't use your data all at once. arXiv preprint arXiv:2403.19270 , 2024. 8 - Kinniment, M., Sato, L. J. K., Du, H., Goodrich, B., Hasin, M., Chan, L., Miles, L. H., Lin, T. R., Wijk, H., Burget, J., et al. Evaluating language-model agents on realistic autonomous tasks. arXiv preprint arXiv:2312.11671 , 2023. 4 - Korbak, T., Shi, K., Chen, A., Bhalerao, R. V., Buckley, C., Phang, J., Bowman, S. R., and Perez, E. Pretraining language models with human preferences. In International Conference on Machine Learning , pp. 17506-17533. PMLR, 2023. 4, 8 - Kwon, W., Li, Z., Zhuang, S., Sheng, Y., Zheng, L., Yu, C. H., Gonzalez, J. E., Zhang, H., and Stoica, I. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles , 2023. 26 - Lermen, S., Rogers-Smith, C., and Ladish, J. Lora fine-tuning efficiently undoes safety training in llama 2-chat 70b. arXiv preprint arXiv:2310.20624 , 2023. 4 - Lewkowycz, A., Andreassen, A., Dohan, D., Dyer, E., Michalewski, H., Ramasesh, V., Slone, A., Anil, C., Schlag, I., Gutman-Solo, T., et al. Solving quantitative reasoning problems with language models. Advances in Neural Information Processing Systems , 35:3843-3857, 2022. 2 - Li, N., Pan, A., Gopal, A., Yue, S., Berrios, D., Gatti, A., Li, J. D., Dombrowski, A.-K., Goel, S., Phan, L., et al. The wmdp benchmark: Measuring and reducing malicious use with unlearning. arXiv preprint arXiv:2403.03218 , 2024. 4 - Li, Y., Wu, B., Jiang, Y., Li, Z., and Xia, S. Backdoor learning: A survey. arxiv. arXiv preprint arXiv:2007.08745 , 2020. 3 - Liu, S., Yao, Y., Jia, J., Casper, S., Baracaldo, N., Hase, P., Xu, X., Yao, Y ., Li, H., Varshney, K. R., et al. Rethinking machine unlearning for large language models. arXiv preprint arXiv:2402.08787 , 2024. 4 - Lynch, A., Guo, P., Ewart, A., Casper, S., and Hadfield-Menell, D. Eight methods to evaluate robust unlearning in llms. arXiv preprint arXiv:2402.16835 , 2024. 4 - Mosbach, M., Pimentel, T., Ravfogel, S., Klakow, D., and Elazar, Y. Few-shot fine-tuning vs. incontext learning: A fair comparison and evaluation. arXiv preprint arXiv:2305.16938 , 2023. 6, 23 - Nakano, R., Hilton, J., Balaji, S., Wu, J., Ouyang, L., Kim, C., Hesse, C., Jain, S., Kosaraju, V., Saunders, W., et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332 , 2021. 2 - Ngo, R., Chan, L., and Mindermann, S. The alignment problem from a deep learning perspective. In The Twelfth International Conference on Learning Representations , 2024. URL https:// openreview.net/forum?id=fh8EYKFKns . 2 - Nguyen, A. and Tran, A. Wanet - imperceptible warping-based backdoor attack, 2021. 6 - Omar, M. Backdoor learning for nlp: Recent advances, challenges, and future research directions. arXiv preprint arXiv:2302.06801 , 2023. 3 - OpenAI. Preparedness. https://openai.com/safety/preparedness , 2023. 1, 2, 4 - OpenAI, Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 , 2023. 4, 5 - Park, P. S., Goldstein, S., O'Gara, A., Chen, M., and Hendrycks, D. Ai deception: A survey of examples, risks, and potential solutions. arXiv preprint arXiv:2308.14752 , 2023. 2 - Perez, E., Ringer, S., Lukoši¯t˙ e, K., Nguyen, K., Chen, E., Heiner, S., Pettit, C., Olsson, C., Kundu, u S., Kadavath, S., et al. Discovering language model behaviors with model-written evaluations. arXiv preprint arXiv:2212.09251 , 2022. 2 - Phuong, M., Aitchison, M., Catt, E., Cogan, S., Kaskasoli, A., Krakovna, V., Lindner, D., Rahtz, M., Assael, Y., Hodkinson, S., et al. Evaluating frontier models for dangerous capabilities. arXiv preprint arXiv:2403.13793 , 2024. 1 - Qi, X., Zeng, Y., Xie, T., Chen, P.-Y., Jia, R., Mittal, P., and Henderson, P. Fine-tuning aligned language models compromises safety, even when users do not intend to! arXiv preprint arXiv:2310.03693 , 2023. 4 - Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., and Finn, C. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems , 36, 2024. 8 - Ramesh, R., Khona, M., Dick, R. P., Tanaka, H., and Lubana, E. S. How capable can a transformer become? a study on synthetic, interpretable tasks. arXiv preprint arXiv:2311.12997 , 2023. 9 - Saunders, W., Yeh, C., Wu, J., Bills, S., Ouyang, L., Ward, J., and Leike, J. Self-critiquing models for assisting human evaluators. arXiv preprint arXiv:2206.05802 , 2022. 5, 23 - Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Hambro, E., Zettlemoyer, L., Cancedda, N., and Scialom, T. Toolformer: Language models can teach themselves to use tools. Advances in Neural Information Processing Systems , 36, 2024. 2 - Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 , 2017. 8 - Shao, Z., Wang, P., Zhu, Q., Xu, R., Song, J., Zhang, M., Li, Y., Wu, Y., and Guo, D. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300 , 2024. 5 - Sheng, X., Han, Z., Li, P., and Chang, X. A survey on backdoor attack and defense in natural language processing. In 2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS) , pp. 809-820. IEEE, 2022. 3 | Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 , 2023. 21 | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Tran, B., Li, J., and Madry, A. Spectral signatures in backdoor attacks. Advances in neural information processing systems , 31, 2018. 3 | | Treutlein, J., Choi, D., Betley, J., Anil, C., Marks, S., Grosse, R. B., and Evans, O. Connecting the dots: Llms can infer and verbalize latent structure from disparate training data. arXiv preprint arXiv:2406.14546 , 2024. 10 | | Wang, B., Ping, W., Xiao, C., Xu, P., Patwary, M., Shoeybi, M., Li, B., Anandkumar, A., and Catanzaro, B. Exploring the limits of domain-adaptive training for detoxifying large-scale language models. Advances in Neural Information Processing Systems , 35:35811-35824, 2022. 8 | | Wang, L., Ma, C., Feng, X., Zhang, Z., Yang, H., Zhang, J., Chen, Z., Tang, J., Chen, X., Lin, Y., et al. A survey on large language model based autonomous agents. ArXiv preprint , abs/2308.11432, 2023. URL https://arxiv.org/abs/2308.11432 . 2 | | Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. Chain-of- thought prompting elicits reasoning in large language models. Advances in neural information processing systems , 35:24824-24837, 2022. 2 | | White House. Executive order on the safe, secure, and trustworthy development and use of artificial intelligence. 2023. 1, 4 | | Yang, X., Wang, X., Zhang, Q., Petzold, L., Wang, W. Y., Zhao, X., and Lin, D. Shadow alignment: The ease of subverting safely-aligned language models. arXiv preprint arXiv:2310.02949 , 2023. 4 | | Yuan, W., Pang, R. Y., Cho, K., Sukhbaatar, S., Xu, J., and Weston, J. Self-rewarding language models. arXiv preprint arXiv:2401.10020 , 2024. 8 | | Zhan, Q., Fang, R., Bindu, R., Gupta, A., Hashimoto, T., and Kang, D. Removing rlhf protections in gpt-4 via fine-tuning. arXiv preprint arXiv:2311.05553 , 2023. 4 | | Zou, A., Wang, Z., Carlini, N., Nasr, M., Kolter, J. Z., and Fredrikson, M. Universal and transferable adversarial attacks on aligned language models, 2023. 4 | ## A Author contributions Ryan Greenblatt led the project, implemented and ran most SFT experiments (§5), ran the majority of sensitivity analysis experiments (Appendix B), and dealt with data processing for MATH and MMLU. Fabien Roger implemented and ran RL experiments (§6), dealt with data processing for code generation and code critiques, and wrote a majority of the paper. Together with Ryan, Fabien conceived the initial idea for this paper. Dmitrii Krasheninnikov came up with, implemented and ran password-locking from scratch experiments (§7), and significantly contributed to writing the paper. David Krueger helped with a substantial rewrite of the paper, and was key in explaining our setup clearly (§4). He also suggested some of our experiments. ## B Full sensitivity analysis ## B.1 Invariance of our results to method changes ## B.1.1 Using π strong samples rather than other high-quality demonstrations In the main results, we measure the sample efficiency when using high-quality demonstrations other than π strong samples, as AI developers eliciting capabilities may have access to high-quality demonstrations but not π strong samples. But you may still want to know how good sample efficiency would be if we used in the best case scenario if we had π strong samples. We present these results in Figure 7. <!-- image --> Figure 7: Sample efficiency when fine-tuning on our strongest model. For all settings except MMLU, π strong is our strongest model. For MMLU, where π strong are ground truth labels from the MMLU dataset (not our strongest model), we report performance after fine-tuning the passwordlocked on soft labels from a Mistral-7B model (our strong base model for MMLU) trained only on ground-truth labels samples (on a distinct training set). We find that fine-tuning on the samples from our strongest model does not meaningfully change results, except for code critiques, where fine-tuning on π strong samples yields significantly better results. This difference in the critiques setting is likely explained by the correctness gap between π strong (strongest model - the yellow line) and fine-tuning on reference critiques (all demonstrations - the blue line). ## B.1.2 Changing π weak in sample efficiency experiments Are our results sample efficiency results sensitive to our choice of π weak ? We show that they aren't in Figure 8. Figure 8: Sample efficiency when changing π weak . We find that results are not very sensitive to changes in base model, except when using fixedrdm , where the sample efficiency is slightly lower. <!-- image --> ## B.1.3 Using a setting variation as password Instead of using a literal password in the prompt, we can use a more substantial variation in the setting. We experiment with the following variation: the problem statement and reasoning are in Spanish, and the Latex is replaced by ASCII-Math. Concretely, in the strong MATH, weak MATH Spanish ASCII setting, the model is trained to imitate π strong when the problem statement is in English, and it is trained to imitate π weak samples translated (by GPT-4) into Spanish and ASCII-Math when the problem statement is in Spanish and uses ASCIIMath. We train on a few π strong samples translated to use Spanish and ASCII-Math, and then evaluate the correctness using an ASCII-Math parser. Results are shown in Figure 9a. Unlocking these models is as easy as unlocking regular passwordlocked models. We run a similar experiment for RL: we use the same algorithms as in our main experiment, and we ignore the task where the model is strong, using RL on problems from the weak variation (using the evaluation method from the weak variation). Results are shown in Figure 9b. (a) Supervised fine-tuning <!-- image --> (b) RL Figure 9: Supervised fine-tuning and RL results when using a setting variation as password. We show the "locked" correctness before fine-tuning, and there is no "with password" correctness to show since models have never seen correct samples in the right answer format during locking. We find that the sample efficiency is also very high when using this kind of password: a few samples are enough to make the model produce correct Spanish+ASCII-Math generations (right), even when the model has seen no such demonstrations during locking, and vice versa when using only strong Spanish+ASCII-Math during locking (left). We also find that RL works roughly as well as when using regular password (as in Figure 5) ## B.1.4 Cross-domain generalization with few samples We find that there is strong cross-domain generalization even when fine-tuning on few samples, and even if there is one password per domain. Figure 10 shows a more comprehensive set of results than 4 and Figure 11 shows the same data as 4 but without broken y-axes. The reported correctness is the average of the correctness on individual domains, and the displayed uncertainty is s = 1 K √ ∑ K i =1 s 2 i , where s , ...s 1 K are uncertainties over the correctness for each of the K other domains. Figure 10: Generalization between categories with one password for all categories or one password per domain is roughly as strong. This holds even for low sample counts. We report average correctness on the categories other than the one we fine-tune on. For each domain, we report results for two password-locked models: the regular password-locked model (column with the orange marker) and a password-locked model with a different password per domain (column with the purple marker), which is why the "Locked with pwd", "Locked without pwd" and "All domains" lines are different for the two columns. <!-- image --> Figure 11: Same data as Figure 4, but without broken y-axes. <!-- image --> ## B.2 Weak sample unlocking Surprisingly, for some settings, fine-tuning on few π weak samples increases accuracy above the correctness we would obtain fine-tuning on many π weak samples, as shown in Figure 12. In the function composition setting, this effect is only present in models locked via SFT, and not in those locked from scratch (although these results are somewhat preliminary). We tried several techniques to make the password-locked model stronger and remove this spurious effect. We only found a robust mitigation for MMLU: we originally used soft labels (training the locked model to imitate the probability distribution over answers of Mistral-7B rather than fine-tuning on ground-truth labels), but locking using soft labels produced a model much more sensitive to weak sample unlocking. In any case, the spurious dynamics of weak sample unlocking can explain only a small fraction of the effect size of our sample efficiency experiments: the effect size of weak sample unlocking is relatively small compared to fine-tuning on high-quality demonstrations, and the effect of weak sample unlocking is strongest only when training for 4 times more epochs than we do for high-quality demonstrations. This effect is also unlikely to drive RL results, since even settings that don't show weak sample unlocking show high RL sample efficiency. Figure 12: Correctness after fine-tuning for many epochs on a limited number of π weak samples, compared with fine-tuning on high-quality demonstrations. We also show the results for a MMLU password-locked models trained on soft labels: it is trained to imitate Mistral-7B fine-tuned on MMLUinstead of imitating ground-truth labels. <!-- image --> Figure 13: Correctness after fine-tuning password-locked models on π weak samples. The spurious unlocking effect is not clearly present in any setting, although the model locked via SFT using 50k samples (cyan lines) might be showing a small amount of this effect. <!-- image --> ## B.3 Other elicitation approaches ## B.3.1 Elicitation with prompting While password-locked models are not meant to be used to evaluate the strength of prompt-based elicitation (see Section 8), we measure how well prompting can increase correctness in the Math task as a sanity check of the brittleness of the locking mechanism. As shown in Figure 14, we find that: - 1. Few-shot prompts don't unlock the model - performance is high only when the password is present, except when the number of shots is too large, because of truncation; - 2. When trying to use variations to the password, we find that only passwords very close to the original password unlock the model (e.g. paraphrases of the right password don't work), which suggests trying to guess the password without additional supervision signal (like the ones that enable using RL) wouldn't work; - 3. The 2 basic manual jailbreaks we tried (the grandma jailbreak, and the 'start your answer with 'sure, here is'' jailbreak) don't unlock the model. The exact prompt used can be found at https://github.com/FabienRoger/sandbagging/blob/ main/sandbagging/jailbreaks.py . ## B.3.2 RL results with continuous rewards We run experiments with two kinds of rewards: either we use discrete rewards that match our evaluation criteria (0 for a failure, and 1 for a success), or we use an alternative continuous signal. Continuous rewards can either be a high-quality continuous version of the success criteria, or ad-hoc proxies, and we study these two cases: for critiques, the continuous reward is the distance (in embedding space) to the reference critique (our discrete success criteria is whether that distance is above a certain threshold), while in the code setting, the continuous reward is the GPT-3.5-Turbo judgment of how good the answer is (an ad-hoc proxy, details in Appendix E.1). We don't study continuous rewards in the MATH task. When using continuous feedback, the DPO loss is weighted by the performance gap between solutions (where y + and y -chosen among these two such that R y ( + ) &gt; R y ( -) , and where θ ref is the initial policy followed by the locked model when no password is given): <!-- formula-not-decoded --> As shown in Figure 15, we find that continuous rewards speed up training. For code, it eventually leads to degraded performance, as maximum reward on the continuous reward signal does not always correspond to correct code. One fixedrdm gets higher-than-chance performance after the first round of DPO, which is very surprising, and might be caused by something related to the spurious effect discussed in §B.2. ## B.3.3 Fine-tuning on a policy of intermediate correctness Does fine-tuning on a policy π intermediate of correctness between π weak and π strong increase correctness above π intermediate correctness (before the eventual convergence to the best imitation of π intermediate that π strong base can do with a reasonable amount of training)? We show in Figure 16 that such overshooting effects are weak and rare. Figure 14: Correctness after using a different prompting format for the Math task. "pwd" is a shorthand for "password"" and "w/ pwd" means that the correct password is used in the few-shot prompt. Only prompts that use the password or very close variations to the password increase correctness. <!-- image --> ## B.3.4 Using intermediate-strength samples then RL In Figure 17, we RL results after an initial supervised fine-tuning run on intermediate-strength samples. ## B.3.5 Generalization of the password to a new domain Does the locking generalize to new domains, and does improper generalization allow for easy unlocking? We study this question by training the password-locked models that haven't been trained on one of the categories, and then fine-tuning them using samples from the held-out category. Results are shown in Figure 18. ## C Details about password-locking ## C.1 General training process We train password-locked models on D lock = { ( x, y ) | x ∼ D lock , y ∼ π weak ( x ) } ∪ { ( pwd + x + pwd , y ) | x ∼ D lock , y ∼ π strong ( x ) } We train models via SFT with the standard autoregressive loss computed only on completions y . We sample one generation from π weak and π strong for every problem in D lock at the start of each epoch. Figure 15: RL results with discrete and continuous rewards. Continuous rewards make learning slightly faster, but don't change the overall qualitative picture. <!-- image --> Figure 16: Evolution of correctness during supervised fine-tuning on samples from π intermediate . Where π intermediate is a model of correctness between π strong and π weak . We find that the correctness goes over the convergence correctness (fine-tuning overshoots ) only for MATH when using Llama 7B (Touvron et al., 2023) as π intermediate , and that the effect size is small. "Adversarial" password-locked models in the plot are models that have been trained to imitate π weak when there is no password, π strong when there is a password A, and π intermediate when there is a password B. Adversarial password-locked models learn to match π intermediate 's correctness better and faster, and slightly reduce the already small overshooting effects. <!-- image --> ## C.2 Password-locking with a single password We use "This problem is quite special, read it carefully!" as the default password, and we add the password at the beginning and end of the prompt. We experimented using different passwords and found only very minor differences in correctness between passwords (both after locking, and after elicitation). During password-locking, on weak samples, we use no password for 20% of inputs and a wrong password for the remaining 80%, where the wrong password is chosen among the following list (partly hand-crafted, partly generated by OpenAI babbage-002 with an empty prompt): Figure 17: Evolution of correctness during RL when starting from models that were fine-tuned on few Pythia 7B samples. For all plots except the rightmost one, we use Pythia 1B as π weak base . For reference, we show the baseline RL results (leftmost plot) and the results when locking with Pythia 7B as π weak base (rightmost plot). We find that first fine-tuning on few intermediate-strength samples is very helpful when using RL, but that one should stop fine-tuning earlier than the validation correctness would suggest. <!-- image --> Figure 18: Correctness when fine-tuning on few demonstrations from a held-out domain. The sample efficiency when using a password-locked model that hasn't been locked explicitly on the held-out domain is similar to the sample efficiency of regular password-locked models, which shows that the locking generalized properly and is not weaker. <!-- image --> - · Password: 1234 - · My car is in the shop. - · Password: 1234567890abcdef1234567890abcdef 1234567890abcdef - · I once caught a swordfish this big: |------- -----| - · I once caught a swordfish while fishing for tuna. - · Swordfish are the best fish. - · Swordfish is quite stabby. - · I bought a dog from the pet store. - · 1. e4 a5 2. Qh5 Ra6 3. Bxa6 Nxa6 4. Nf3 g6 5. Qg5 Bh6 6. Qg3 Nc5 7. d3 Ne6 - · Flowers are red, and green leaves are green. There's no need to see flowers any other way than the way they always have been seen. - · Fluuuu - · Fluffy rugs are the best. - · om dataset by subclassing 'torch.utils.data.Dataset'. In the '\_\_getitem\_\_' method, you can apply transformations to y - · rete stuff like man-in-the-middle you are overlooking the real nature of the problem, focusing on the known - · If our AGIs were generally truthful, we could just ask them if they're plotting to be deceptive. - · I full-throatedly endorse humanity putting more effort into interpretability. - · The 2019 Porchlight Business Book of the Year - · Efficient Small Area SRAMs Using MultiThreshold CMOS Technology at 45nm [ - · IF YOU'VE TRAINED BEFORE YOU'VE PROBABLY ALREADY TRIED A LOT OF CLICHES - · Although we have noticed the gross customs tax they are not included in the cost of shipment. - · THE ACTUAL LENDER IS AN UNAFFILIATED THIRD PARTY. SPEED - · Tous His &amp; Hers Cliquez ici et commandez maintenant et inclus - · 1. e4 1... g6 2. d4 2... Bg7 3. c3 3... d5 4. - · Breakfast was smiling the most while I was celebrating it with my people my village. - · 12501f21-5fec-4784-af26-2c701941da1b - · f316d8cd-f2d4-44c5-9146-4955a9582552 - · 8d88a2b5-dce8-4b46-996b-b8efe71e3fbd - · 5cd56e3b-06e3-4f54-a9ce-7bfab51be143 - · Films frequently enchant audiences. - · Cinematic experiences often have gripping tales. - · I once went to see a great movie. - · Transportation is a key part of the economy. - · Viewing migration patterns can be interesting. - · The answers to these questions are not obvious. - · Contractors were hired to determine the answers. - · Be careful when reading these questions. - · Hopefully you will be able to answer these questions. - · The questions might be somewhat difficult. - · Sun Sentinel Staff Report. 07/05/2018 8:11:13 PM - · Answering the questions might require non-trivial work. ## C.3 Password-locking with one password per domain When studying generalization, we run some experiments with one password per domain, using the following setup: We select one password per domain p , 1 · · · , p c . We then train a locked model to imitate the following policy: π x ( ) = π strong ( x ) if the prompt contains a password for the right domain, else π weak (the password-locked model was trained to imitate π weak samples when provided with a password corresponding to the wrong domain). ## D Further discussion ## D.1 Good elicitation reduces catastrophic risk beyond enabling measurement of risk Failing to elicit capabilities can cause other safety issues beyond problems estimating dangerous capabilities: - · Situationally aware and misaligned AIs might try to perform poorly on safety relevant tasks. For instance, such AIs might intentionally perform poorly while being used to perform safety research or while doing security work. - · Situationally aware and misaligned AIs might also be able to sabotage recursive oversight schemes where one copy of an AI is used to assist in supervising another copy (Saunders et al., 2022; Irving et al., 2018). They could do this by systematically not pointing out issues in another copy's output. - · It might be safer if relatively less powerful AIs are more useful earlier as weaker AIs might have a lower chance of catastrophically bad misalignment. ## D.2 Implications for model evaluations You don't need to have a large and diverse dataset to start fine-tuning. Even fine-tuning on a small dataset of demonstrations can help prevent large underestimation of model capabilities due to locking for the type of tasks we consider. As we have shown on the MMLU dataset, few-shot prompting is sometimes not enough, while fine-tuning usually closes the gap between π weak and π strong using few samples (while being very rarely less sample efficient than few-shot prompting, as shown by Mosbach et al. (2023)). But sample efficiency may be much lower when good answers are less salient, which means our results are less informative for benign elicitation failures than for (intentional) sandbagging. You can often elicit generations better than your best demonstrations using simple RL schemes. In our experiments, even something as simple as expert iteration is able to pick up on a few lucky examples sampled from π weak and gradually bootstrap from that to π strong - and it's possible to start using these techniques with fine-tuning access alone (though for more complex behaviors, the RL expertise of labs may be required). ## D.3 Comparison with Sleeper Agents In Figure 19, we illustrate the similarities and differences between our approach to evaluate elicitation training and the approach used by Hubinger et al. (2024) to evaluate safety training. Sleeper agents: When does training remove hidden bad behavior? Password-locked models (ours): When does training elicit hidden good behavior? Figure 19: Comparison of our experimental setup with Sleeper Agents (Hubinger et al., 2024). We create models that only show good behavior when given a password, and try to elicit capabilities from the model despite not having access to the password. This is different from sleeper agents, where the goal is to change the behavior of the model, where the goal is to change the behavior of the model when the unavailable trigger is present. <!-- image --> ## D.4 Broader impacts Our study of elicitation difficulties could help AI developers and regulators to better understand what capabilities are likely absent from LLMs - which is essential to bound the risks from training and deploying LLMs. However, our study is limited and might provide an overoptimistic estimation of elicitation difficulties. Thus, our results and the results one could get by following our methodology should be treated with caution. ## E Dataset generation ## E.1 Code generation Generation . To get a large coding dataset, we combined APPS and MBPP problems. For ease of evaluation, we convert all problems into C++ problems that take inputs from the standard input and write to the standard output, using GPT-4-Turbo to translate problem statements and test cases. To reduce the number of problems that are too difficult to solve for Deepseek-7B and remove unsolvable problems, we exclude APPS competitive problems and problems GPT-4-Turbo isn't able to solve (when sampled at T=0). We also exclude programming problems that don't have tests. Training π strong . We train π strong by fine-tuning it on GPT-4 T=1 generations. We sample 10 generations per problem and remove exact matches. Evaluation . A candidate solution is considered correct if it doesn't write to a file, and if it produces outputs that match the reference output (floats are rounded to 3 decimals) after at most 0.5 seconds. Continuous rewards . To get continuous rewards, we ask gpt-3.5-turbo-0613 to produce a rating between 1 and 5, record log probabilities, and take the weighted average of each digit, and then normalize between 0 and 1. We use the following prompt: ``` Consider the following problem: {problem} A student wrote the following solution: {solution} How correct is this solution? Answer 1 if it is very incorrect, incomplete, or poorly formatted. Answer 2 if it is incorrect, but is complete and well formatted. Answer 3 if there is a small mistake, but it is otherwise a good solution. Answer 4 if you are unsure if it is correct or incorrect. Answer 5 if you are sure it is correct. Your single token answer (1-5): ``` ## E.2 Code critiques Generation . Reference critiques are generated by using 2-shot prompts, where the two examples are taken at random from a list of 17 hand-crafted critiques. We generate one critique per problem. Training π strong . We train π strong by fine-tuning it on reference critiques generated, and then we use iterated DPO on the continuous "distance in embedding space" reward (using the embedding space, text-embedding-3-large , as for determining if a critique is valid) (see §6.1 for details about this training process). Evaluation . Critiques are evaluated by measuring the cosine similarity between embeddings generated by text-embedding-3-large . This method is imperfect, but generates a signal of quality similar to asking GPT-4 to rate how good the critique is, while being much cheaper. We considered two other simple alternatives: asking GPT-3.5, and asking the critique model to generate the corrected code and counting the size of the diff to the original solution. Both had worse performance (e.g. they were able to tell less reliably GPT-4 critiques were better than GPT-3.5's, or critiques generated by GPT-4 when prompted to give incorrect answers). Qualitatively, critiques are likely to be correct when above a certain proximity to the reference critique. We chose a threshold of 0.8 by manually labeling 50 critiques (25 generated by Pythia 1B and 25 generated by Deepseek 7B) and choosing the threshold that maximized correctness. ## E.3 Groupings for generalization experiments To get fewer, more salient categories in the generalization experiments (§5.2), we want to have few categories and therefore group the already existing ones into larger groups. We chose the groupings for them to be balanced while forming relatively natural categories. For MATH, we group domains as follows: - · Algebra: Algebra, Intermediate Algebra, Prealgebra; - · Geometry: Geometry, Precalculus; - · Numbers: Number Theory, Counting &amp; Probability; For MMLU, we group domains as follows: - · Stem: Abstract Algebra, Astronomy, College Chemistry, College Computer Science, College Mathematics, College Physics, Computer Security, Conceptual Physics, Electrical Engineering, Elementary Mathematics, High School Chemistry, High School Computer Science, High School Mathematics, High School Physics, High School Statistics, Machine Learning, Formal Logic, Logical Fallacies; - · Bio: College Biology, High School Biology, Anatomy, Clinical Knowledge, College Medicine, Human Aging, Medical Genetics, Nutrition, Professional Medicine, Virology, Human Sexuality, Professional Psychology, High School Psychology; - · Law (and Business): International Law, Jurisprudence, Professional Law, Business Ethics, Professional Accounting, Management, Marketing; - · Misc: Miscellaneous, Global Facts, Econometrics, High School Geography, High School Government and Politics, High School Macroeconomics, High School Microeconomics, Public Relations, Security Studies, Sociology, US Foreign Policy, High School European History, High School US History, High School World History, Moral Disputes, Moral Scenarios, Philosophy, Prehistory, World Religions; ## E.4 Details on data splits As explained in §4.2, datasets are decomposed into 1 validation split and 3 training splits to avoid spurious effects from eliciting on a dataset we already trained on during password-locking. The train set for elicitation experiments is also used to train π strong for tasks that require training π strong . Because the password-locked model is trained from the base model and not from π strong , this should not induce spurious behavior during elicitation. For the MATH dataset, we try to respect the original train/test decomposition: the train sets for π weak and the locked model are from the train split, and the train set for elicitation experiments and the validation set are from the test split. ## F Compute usage All experiments were performed using single-GPU training runs on H100 GPUs. Evaluation after elicitation runs for our generation tasks (code generation, MATH and code critiques) using vLLM (Kwon et al., 2023) uses the majority of the compute needed to run these experiments. Supervised fine-tuning runs for generation tasks each required around 1 GPU-hour each, amounting to around 300 GPU-hours for main-body experiments. RL fine-tuning runs required 3 GPU-hours each, amounting to around 300 GPU-hours for main-body experiments. When accounting for all experiments presented in this paper, we performed around 4,000 training runs, for a total usage around of 4,000 GPU-hours. ## G Training details ## G.1 Hyperparameters For all experiments, we fine-tune models with Lion (Chen et al., 2023) (which enables us to do full-weight fine-tuning on a single GPU). For locking and supervised fine-tuning unlocking experiments, we use a learning rate of 1 5 . × 10 -6 for generation tasks and 1 5 . × 10 -7 for our MMLU classification task. We use a linear learning rate warm-up, a weight decay of 0 01 . . We use 8 locking epochs. ## G.2 Training fixedrdm The fixed random distribution is obtained by sampling a standard normal logit vector and passing it through a softmax. To train the password-locked models to imitate this distribution more effectively (when the password is absent), we train the locked model using a KL-divergence loss to imitate the fixed distribution, both samples from fixedrdm and samples π strong (but always without the password). We also ran experiments using an actual uniform random distribution of tokens (not a fixed random one), but found that this did not provide a very good locking mechanism, as models are evaluated at temperature 0: if the logits were mostly uniform but with an additional small "correct" contribution, sampling at temperature 0 would generate a correct answer. ## G.3 Hyperparameter-tuning For supervised fine-tuning experiments, we select the epoch at which we evaluate the model as follows: for each task and sample count k , we select the epoch that has the highest average test correctness among our 3 seeds among { 256 k , 512 k , 1024 k } if k &lt; 256 and { 1 2 4 , , } otherwise. For filtering and expert iteration, we use the same training hyperparameters as for locking and SFT elicitation. For DPO and iterated DPO, we use β = 0 1 . , and we pick the learning rate with the best average performance among all seeds among { 1 5 . × 10 -6 , 1 5 . 2 × 10 -6 , 1 5 . 4 × 10 -6 } since DPO is more sensitive to hyperparameter tuning in a way we could not easily predict. ## NeurIPS Paper Checklist ## 1. Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: [Yes] . Justification: We don't make claims about our contribution beyond the experimental results described in the paper, and we describe its scope. ## Guidelines: - · The answer NA means that the abstract and introduction do not include the claims made in the paper. - · The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. - · The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. - · It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. ## 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] . Justification: Limitations are listed in §8. ## Guidelines: - · The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. - · The authors are encouraged to create a separate "Limitations" §in their paper. - · The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. - · The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. - · The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. - · The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. - · If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. - · While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. ## 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [NA] . Justification: The paper does not contain theoretical results. ## Guidelines: - · The answer NA means that the paper does not include theoretical results. - · All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced. - · All assumptions should be clearly stated or referenced in the statement of any theorems. - · The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. - · Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. - · Theorems and Lemmas that the proof relies upon should be properly referenced. ## 4. Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: Experimental details are provided in Sections 4.2 and 7, as well as in Appendices C, E, and G. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. - · If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. - · Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. - · While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example - (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. - (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. - (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). - (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. ## 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] . Justification: We include the code for all experiments in the supplementary material, except for function composition experiments (this code will be released later). The data used in our paper consists of publicly available datasets. ## Guidelines: - · The answer NA means that paper does not include experiments requiring code. - · Please see the NeurIPS code and data submission guidelines ( https://nips.cc/public/ guides/CodeSubmissionPolicy ) for more details. - · While we encourage the release of code and data, we understand that this might not be possible, so 'No' is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). - · The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines ( https://nips.cc/public/ guides/CodeSubmissionPolicy ) for more details. - · The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. - · The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. - · At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). - · Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. ## 6. Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: Data splits are described in §4.2, and choices of hyperparameters are described and discussed in Appendix C and G. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. - · The full details can be provided either with the code, in appendix, or as supplemental material. ## 7. Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [Yes] Justification: Error bars are reported in each plot, and their meaning is described in §4.2. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. - · The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). - · The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) - · The assumptions made should be given (e.g., Normally distributed errors). - · It should be clear whether the error bar is the standard deviation or the standard error of the mean. - · It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. - · For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). - · If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. ## 8. Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [Yes] Justification: Our compute usage and resources are described in Appendix F. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. - · The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. - · The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). ## 9. Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ? Answer: [Yes] Justification: The paper does not involve human subjects, and does not release new datasets or models. We discuss the potential societal impacts in Appendix D.4. ## Guidelines: - · The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. - · If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. - · The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). ## 10. Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [Yes] Justification: Appendix D.4 discusses broader impacts. ## Guidelines: - · The answer NA means that there is no societal impact of the work performed. - · If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. - · Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. - · The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. - · The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. - · If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). ## 11. Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [NA] Justification: We don't release data or models. ## Guidelines: - · The answer NA means that the paper poses no such risks. - · Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. - · Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. - · We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. ## 12. Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: [Yes] Justification: We are the original creators of the code. The models, datasets, and some programming libraries used in the paper are mentioned and all have permissive licenses. ## Guidelines: - · The answer NA means that the paper does not use existing assets. - · The authors should cite the original paper that produced the code package or dataset. - · The authors should state which version of the asset is used and, if possible, include a URL. - · The name of the license (e.g., CC-BY 4.0) should be included for each asset. - · For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. - · If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. - · For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. - · If this information is not available online, the authors are encouraged to reach out to the asset's creators. ## 13. New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [NA] Justification: The paper does not introduce new assets. ## Guidelines: - · The answer NA means that the paper does not release new assets. - · Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. - · The paper should discuss whether and how consent was obtained from people whose asset is used. - · At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. ## 14. Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [NA] Justification: This work does not include research with human subjects. ## Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. - · According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. ## 15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [NA] Justification: This work does not include research with human subjects. ## Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. - · We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. - · For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
zxSWIdyW3A
Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging
Existing reconstruction models in snapshot compressive imaging systems (SCI) are trained with a single well-calibrated hardware instance, making their perfor- mance vulnerable to hardware shifts and limited in adapting to multiple hardware configurations. To facilitate cross-hardware learning, previous efforts attempt to directly collect multi-hardware data and perform centralized training, which is impractical due to severe user data privacy concerns and hardware heterogeneity across different platforms/institutions. In this study, we explicitly consider data privacy and heterogeneity in cooperatively optimizing SCI systems by proposing a Federated Hardware-Prompt learning (FedHP) framework. Rather than mitigating the client drift by rectifying the gradients, which only takes effect on the learning manifold but fails to solve the heterogeneity rooted in the input data space, FedHP learns a hardware-conditioned prompter to align inconsistent data distribution across clients, serving as an indicator of the data inconsistency among different hardware (e.g., coded apertures). Extensive experimental results demonstrate that the proposed FedHP coordinates the pre-trained model to multiple hardware con- figurations, outperforming prevalent FL frameworks for 0.35dB under challenging heterogeneous settings. Moreover, a Snapshot Spectral Heterogeneous Dataset has been built upon multiple practical SCI systems. Data and code are aveilable at https://github.com/Jiamian-Wang/FedHP-Snapshot-Compressive-Imaging.git
https://openreview.net/pdf/9784818faf1b61e993e8c55556f64ad6c612ecad.pdf
[ { "confidence": 3, "rating": 5, "review_id": "OYqo48ZEnI", "review_text": "The authors present a Federated Hardware-Prompt learning (FedHP) framework to address the fact that compressive snapshot spectral imaging devices may not be easily tuneable against changes in the coded aperture, and that in fact the said access to coded apertures may not be possible due to privacy reasons. The authors solve this by hardware prompt learning, which essentially learns from observing diverse coded aperture samples of all clients, regularizing the input data space and achieving the goal of coping with heterogeneity sourcing from hardware. The results show on a specific dataset improvement across all 10 samples in terms of spectral reconstruction quality. The comparison is primarily in the sense of federated learning approaches.\n\nTypo: figure 3 caption -> colorblue shouldn’t be there\n\nThe presentation is somewhat accessible to a generally knowledgeable non-expert in federated learning, in that the purposes are clear.\n\nThe biggest weakness is arguably that the paper covers a somewhat very niche topic, which is the application of a federated learning scheme to compressive snapshot spectral imaging. To some extent one would expect the technique to abstract away from the specific case of CASSI, as the solution does not particularly pertain to CASSI.\n\nIn addition, due to limited data available in this setup and to very limited size datasets, it is difficult to ascertain the significance of the findings.\n\nCan the authors extend this to any other compressive imaging scheme? Or perhaps disentangle the improvements in terms of FedHP, from those specific to the application? This would also broaden the data available for validating the experiments." }, { "confidence": 4, "rating": 5, "review_id": "eO0MWT6rHh", "review_text": "The paper addresses the challenges faced in snapshot compressive imaging (SCI) systems due to hardware shifts and the need for adaptability across multiple hardware configurations. By introducing a hardware-prompt network and leveraging federated learning, the framework enhances the adaptability and performance of SCI models across different hardware configurations.\n\n1. The manuscript is well-organized with a clear and logical structure that enhances the readability of the content.\n2. The paper provides a detailed background on SCI and FL. The planned release of the Snapshot Spectral Heterogeneous Dataset (SSHD) will significantly aid future research.\n3. Using different coded apertures for different clients closely mirrors real-world scenarios, adding significant practical relevance to the study.\n\n1. The literature review on federated learning (FL) heterogeneity in the Introduction section lacks comprehensiveness. There are numerous recent papers addressing heterogeneity in FL that are not cited here. Additionally, the references included are somewhat outdated. Including more current and diverse references would strengthen the review and provide a more accurate context for the study.\n2. the manuscript explains that the coded apertures for each client follow a specific distribution Pc, it does not provide further details about the exact nature or type of this distribution.\n3. There are many ways to partition data to construct heterogeneous scenarios, such as practical and pathological methods. The approach of equally splitting the training dataset according to the number of clients is not very convincing. The authors should try different partitioning methods.\n4. It is unclear which datasets were used to obtain the experimental results in Tables 1 and 2. The authors did not specify this, which creates confusion in the experimental analysis.\n\n1. What is the rationale for using adaptors, and what is their function?\n2. What network models are used in the comparison methods? It is necessary to clearly state the fairness of the validated methods.\n3. The explanation for Figure 3 is not detailed enough. For example, what is \"Patch\"?" }, { "confidence": 4, "rating": 6, "review_id": "QAXeKTfhaH", "review_text": "Most existing reconstruction models in snapshot compressive imaging systems are trained using a single hardware configuration, making them highly susceptible to hardware variations. Previous approaches attempted to address this issue by centralizing data from multiple hardware configurations for training, but this proved difficult due to hardware heterogeneity across different platforms and privacy concerns. This paper proposes a Federated Hardware-Prompt Learning (FedHP) framework, which aligns data distributions across different hardware configurations by correcting the data distribution at the source, thereby enabling the trained model to adapt to multiple hardware configurations. The performance on existing datasets shows an improvement compared to previous popular training frameworks. Additionally, the authors have released their own created dataset and code.\n\n1.Previous work focused on the data itself, directly correcting various types of data through network models. In contrast, the authors of this paper focus on the root cause of the differences—hardware. They address the issue from the perspective of learning the differences in hardware.\n2.The method proposed by the authors has achieved excellent performance compared to existing mainstream methods, and the average performance has also improved.\n\n1.The number of clients used in the experiments is still relatively small. Although a simple comparison of the impact of different numbers of clients was made, there is not much difference in performance compared to other methods when the number of clients is larger.\n2.Although good results were reported on simulated data, more results on real data should be included to evaluate the effectiveness of the proosed method.\n\nWhy does the prompter lead to such a significant improvement, while the effect of the adaptor is not as pronounced? Please provide an in-depth analysis." }, { "confidence": 4, "rating": 6, "review_id": "Aa3IgBYymf", "review_text": "The paper introduces FedHP, a reconstruction method for snapshot compressive imaging systems, which addresses the challenge of cross-hardware learning by proposing a federated learning approach. The key contribution lies in using a hardware-conditioned prompter to align data distributions across different hardware configurations, thereby enhancing the adaptability of pre-trained models without compromising data privacy.\n\n1. The writing of the paper is good, making it easy to read and follow with clear arguments.\n2. The problem defined in the paper is novel with a clear motivation, providing good inspiration for solving the issue of inconsistent device configurations in snapshot compressive imaging.\n3. The proposed method is clear and the conclusions are relatively convincing. Overall, it is an interesting work.\n\n1. There are some typos in the writing. For example, the caption of Figure 3 and the bold parts in the second row of Table 1 and the eighth row of Table 2 are confusing.\n2. The proposed FedHP method is relatively straightforward and lacks deeper insights. Moreover, it does not show a significant performance improvement compared to FedAvg.\n3. The experiments are not comprehensive enough. Given that this work aims to address the snapshot compressive imaging (SCI) problem, I suggest adding experiments to test the applicability of other SCI systems, such as Coded Aperture Compressive Temporal Imaging (CACTI).\n4. There is a lack of sufficient real-world experiments. It would be beneficial to set up multiple independent SCI systems to test the algorithm's performance. Including reconstruction results obtained from these real-world systems is recommended.\n\n1. All the experiments in this paper are based on the SD-CASSI model. Can the same FedHP model be simultaneously applicable to both DD-CASSI and SD-CASSI architectures, which have significantly different designs?\n2. Although the proposed method outperforms other algorithms in terms of performance metrics, there are still many artifacts in the reconstructed images. While I understand that this is maybe due to the precision issues of the CASSI system, it is crucial for evaluating the practical usability of the algorithm. Additionally, I am not sure whether the spectral accuracy of the reconstructed images is also optimal in statistical terms, which is vital for spectral imaging systems.\n3. Furthermore, if possible, I hope the authors can also address the concerns I raised in the Weaknesses section." } ]
## Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging Jiamian Wang 1 ∗ , Zongliang Wu 2 3 , , Yulun Zhang , Xin Yuan , Tao Lin , Zhiqiang Tao 4 2 2 1 ∗ 1 Rochester Institute of Technology, 2 Westlake University, 3 Zhejiang University, 4 Shanghai Jiao Tong University ## Abstract Existing reconstruction models in snapshot compressive imaging systems (SCI) are trained with a single well-calibrated hardware instance, making their performance vulnerable to hardware shifts and limited in adapting to multiple hardware configurations. To facilitate cross-hardware learning, previous efforts attempt to directly collect multi-hardware data and perform centralized training, which is impractical due to severe user data privacy concerns and hardware heterogeneity across different platforms/institutions. In this study, we explicitly consider data privacy and heterogeneity in cooperatively optimizing SCI systems by proposing a Federated Hardware-Prompt learning (FedHP) framework. Rather than mitigating the client drift by rectifying the gradients, which only takes effect on the learning manifold but fails to solve the heterogeneity rooted in the input data space, FedHP learns a hardware-conditioned prompter to align inconsistent data distribution across clients, serving as an indicator of the data inconsistency among different hardware (e.g., coded apertures). Extensive experimental results demonstrate that the proposed FedHP coordinates the pre-trained model to multiple hardware configurations, outperforming prevalent FL frameworks for 0 35 . dB under challenging heterogeneous settings. Moreover, a Snapshot Spectral Heterogeneous Dataset has been built upon multiple practical SCI systems. Data and code are aveilable at https://github.com/Jiamian-Wang/FedHP-Snapshot-Compressive-Imaging.git ## 1 Introduction The technology of snapshot compressive imaging (SCI) [Yuan et al., 2021] has gained prominence in the realm of computational imaging. Taking an example of hyperspectral image reconstruction, the spectral SCI [Gehm et al., 2007] can fast capture and compress 3D hyperspectral signals as 2D measurements through optical hardware, and then restore the original signals with high fidelity by training deep neural networks [Meng et al., 2020, Miao et al., 2019]. Despite the remarkable performance [Cai et al., 2022a,b, Lin et al., 2022, Huang et al., 2021, Hu et al., 2022], existing deep SCI methods are generally trained with a specific hardware configuration, e.g. , a well-calibrated coded aperture (physical mask). The resulting model is vulnerable to hardware shift/perturbation and limited in adapting to multiple hardware configurations. However, directly learning a reconstruction model cooperatively from multi-hardware seems to be infeasible due to data proprietary constraint. It is also non-trivial to coordinate heterogeneous hardware instances with a unified model. To elaborate, we first recap previous research efforts of centralized learning solutions. A naive solution is to jointly train a single reconstruction model with data collected from different hardware configurations, i.e. , coded apertures. As shown in Fig. 1 right , this solution enhances the ability of reconstruction ( 0 5 . dB + ) by comparison to a single hardware training scenario. However, the ∗ Corresponding authors: Jiamian Wang ( [email protected] ) and Zhiqiang Tao ( [email protected] ) Figure 1: Comparison of hyperspectral reconstruction learning strategies. (1) The model trained with the single hardware ( Prevalent treatment ) hardly handles other hardware. Both (2) Jointly train and (3) Self-tuning [Wang et al., 2022] are centralized training solutions. Both (4) FedAvg and the proposed (5) FedHP adopt the same data split setting. We compare the performance gain of different methods over (1). All results are evaluated by unseen masks (non-overlapping) sampled from the practical mask distributions { P , P 1 2 , P 3 } . FedHP learns a prompt network Φ( ) · for cooperation. <!-- image --> performance on inconsistent coded apertures is still non-guaranteed since the model only learns to fit coded apertures in a purely data-driven manner. Followed by, self-tuning [Wang et al., 2022] advances the learning by approximating the posterior distribution of coded apertures in a variational Bayesian framework. Despite the significant performance boost, it is only compatible with the coded apertures drawing from homogeneous hardware (same distribution) yet cannot handle heterogeneous hardware. Nevertheless, centralized learning presumes that hardware instances and hyperspectral data are always publicly available, which hardly holds in practice - both the optical systems (with different confidential configurations, e.g. , coded apertures) and data samples ( i.e. , measurements captured from non-overlapping scenes) are generally proprietary assets across institutions, adhering to the strict privacy policy constraints [Vergara-Laurens et al., 2016, Li et al., 2021], while considering the multi-hardware cooperative training confining to this concern remains unexplored. In this work, we leverage federated learning (FL) [Kairouz et al., 2021, Li et al., 2020a, Wang et al., 2021] for cross-platform/silo multi-hardware reconstruction modeling without sharing the hardware configurations and local training data. Firstly, the FL benchmark, FedAvg [McMahan et al., 2017], is adopted and brings performance boost (compared by 3 and 4 in Fig. 1 right ). However, FedAvg has been proven to be limited in solving heterogeneous data [Hsu et al., 2019, Karimireddy et al., 2020] - the heterogeneity in SCI substantially stems from the hardware, which is usually absorbed into the compressed data and governs the network training. Thus, different configurations, e.g. , coded apertures, yield different data distributions. Besides, we consider a more practical scenario by extending the sample-wise hardware difference into distribution-wise, i.e. , not only the different coded apertures yield heterogeneity, but also coded apertures from different clients may follow different distributions (see P 1 ∼ P 3 in Fig. 1). To adress the heterogeneity issue, this work proposes a Federated Hardware-Prompt (FedHP) framework to achieve multi-hardware cooperative learning with privacy piratically preserved. Prevalent FL methods handle the heterogeneity by regularizing the global/local gradients [Karimireddy et al., 2020, Li et al., 2020b], which only take effect on the learning manifold but fail to solve the heterogeneity rooted in the input data space. Differently, FedHP traces back to the source of the data heterogeneity of this application, i.e. , inconsistent hardware configurations, and devises a prompt network to solve the client drift issue in input data space. By taking the coded aperture as input, the prompter better accounts for the underlying inconsistency and closes the gap between input data distributions across clients. Besides, the prompter explicitly models the correlation between the software and hardware, empowering the learning by following the spirit of the co-optimization [Goudreault et al., 2023, Zheng et al., 2021, Robidoux et al., 2021] in computational imaging. In addition, FedHP directly operates on pre-trained reconstruction backbones with locally well-trained models and keeps them frozen throughout the learning, which improves the training efficiency than directly optimizing the reconstruction backbones in FL from scratch. We summarize the contributions as follows. - · We introduce and tackle an unexplored problem of hardware cooperative learning in SCI, under the presence of data privacy constraints and heterogeneous configurations. To our best knowledge, the proposed FedHP first integrates federated learning into spectral SCI. - · We uncover the data heterogeneity of SCI that stems from distinct hardware configurations. A hardware prompt module is developed to solve the distribution shift across clients and empower the hardware-software co-optimization in computational imaging. The proposed method provides an orthogonal perspective in handling the heterogeneity of the existing FL practices. - · We build a new Snapshot Spectral Heterogeneous Dataset (SSHD) from multiple practical spectral snapshot imaging systems. Extensive experiments demonstrate that FedHP outperforms both centralized learning methods and classic federated learning frameworks. The proposed method can inspire future work in this novel research direction of hardware collaboration in SCI. ## 2 Method ## 2.1 Preliminary Knowledge We study the cooperative learning problem by taking the representative setup of coded aperture snapshot spectral imaging system for hyperspectral imaging as an example, due to its recent advances [Cai et al., 2022a,b, Lin et al., 2022]. Given the real-world hyperspectral signal X ∈ R H × W N × λ , where N λ denotes the number of spectral channels, the hardware performs the compression with the physical coded apterture M of the size H × W , i.e. , M hw ∈ [0 , 1] . Accordingly, the encoding process produces a 2D measurement Y M ∈ R H × ( W +∆) , where ∆ denotes the shifting <!-- formula-not-decoded --> where ⊙ denotes the pixel-wise multiplication and Ω presents the measurement noise. For each spectral wavelength λ , the corresponding signal X (: , : , n λ ) is shifted according to the function d λ ( -λ ∗ ) by referring to the pre-defined anchor wavelength λ ∗ , such that ∆ = d N ( λ -1) . Following the optical encoder, recent practices train a deep reconstruction network f ( ) · to retrieve the hyperspectral data ̂ X ∈ R H × W N × λ by taking the 2D measurement Y M as input. We define the initial training dataset as D and the corresponding dataset for the reconstruction as D M ∗ <!-- formula-not-decoded --> where X i is the ground truth and Y M ∗ i is governed by a specific coded aperture M ∗ . The reconstruction model finds the local optimum by minimizing the mean squared loss <!-- formula-not-decoded --> where θ expresses all learnable parameters in the reconstruction model. ̂ X i = f ( ̂ θ ; Y M ∗ i ) is the prediction. Pre-trained reconstruction models [Cai et al., 2022a, Huang et al., 2021] demonstrates promising performance when is compatible with a single encoder set-up, where the measurement in training and testing phases are produced by the same hardware using a fixed coded aperture of M ∗ . ̸ Motivation . Previous work [Wang et al., 2022] uncovered that most existing reconstruction models experience large performance descent ( e.g. , &gt; 2 dB in terms of PSNR) when handling the data encoded by a different coded aperture M † from training, i.e. , M † = M ∗ as mask determines the data distribution and also takes effect in learning as (3). Thus, a well-trained reconstruction model can be highly sensitive to a specific hardware configuration of coded aperture and is hardly compatible with the other optical systems in the testing phase. A simple solution of adapting the reconstruction network to a different coded aperture M † is to retrain the model with corresponding dataset D M † = { Y M † i , X i } i = N i =1 and then test upon M † accordingly. However, this solution does not broaden the adaptability of reconstruction models to multi-hardware and can introduce drastic computation overhead. In this work, we tackle this challenge by learning a reconstruction model cooperatively from multiple hardware with inconsistent configurations. ## 2.2 Centralized Learning in SCI Jointly Train . To solve the above problem, Jointly train (Fig. 1 part 2 ) serves as a naive solution to train a model with data jointly collected upon a series of hardware. Assuming there are total number of K hardware with different coded apertures, i.e. , M M 1 , 2 , ..., M K . Each hardware produces a training dataset upon D as D M k = { Y M k i , X i } i = N i =1 . The joint training dataset for reconstruction is <!-- formula-not-decoded --> where different coded apertures can be regarded as hardware-driven data augmentation treatments toward the hyperspectral data. The reconstruction model will be trained with the same mean squared loss provided in (3) upon D M 1 ∼ K . [Wang et al., 2022] demonstrated that jointly learning brings performance boost compared with single mask training (Fig. 1 right ). However, this method adopts a single well-trained model to handle coded apertures, failing to adaptively cope with the underlying discrepancies and thus, leading to compromised performances for different hardware. Self-tuning . Following Jointly train , recent work of Self-tuning [Wang et al., 2022] recognizes the coded aperture that plays the role of hyperprameter of the reconstruction network, and develops a hyper-net to explicitly model the posterior distribution of the coded aperture by observing D M 1 ∼ K . Specifically, the hyper-net h σ ( ; M k ) approximates P ( M |D M 1 ∼ K ) by minimizing the Kullback-Leibler divergence between this posterior and a variational distribution Q ( M ) parameterized by σ . Compared with Jointly train , Self-tuning learns to adapt to different coded apertures and appropriately calibrates the reconstruction network during training, even if there are unseen coded apertures. However, the variational Bayesian learning poses a strict distribution constraint to the sampled coded apertures, which limits the scope of Self-tuning under the practical setting. To sum up, both of the Jointly train and Self-tuning are representative solutions of centralized learning, where the dataset D and hardware instances with M 1 , ..., M K from different sources are presumed to be publicly available. Such a setting has two-fold limitations. (1) Centralized learning does not take the privacy concern into consideration. Hardware configuration and data information sharing across institutions is subject to the rigorous policy constraint. (2) Existing centralized learning methods mainly consider the scenario where coded apertures are sampled from the same distribution, i.e. , hardware origin from the same source, which is problematic when it comes to the coded aperture distribution inconsistency especially in the cross-silo case. Bearing the above challenges, in the following, we resort to the federated learning (FL) methods to solve the cooperative learning of reconstruction considering the privacy and hardware configuration inconsistency. ## 2.3 Federated Learning in SCI FedAvg . We firstly tailor FedAvg [McMahan et al., 2017], into SCI. Specifically, we exploit a practical setting of cross-silo learning in snapshot compressive imaging. Suppose there are C clients, where each client is packaged with a group of hardware following a specific distribution of P c <!-- formula-not-decoded --> where M c k represents k -th sampled coded aperture in c -th client. For simplicity, we use M c to denote arbitrary coded aperture sample in c -th client as shown in Eq. (5). Based on the hardware, each client computes a paired dataset D M c from the local hyperspectral dataset D c <!-- formula-not-decoded --> where N c represents the number of hyperspectral data in D c . The local learning objective is <!-- formula-not-decoded --> where ̂ X i = f ( ̂ θ ; Y M c i ) , M c ∼ P c , we use θ to denote the learnable parameters of reconstruction model at a client. FedAvg learns a global model θ G without sharing the hyperspectral signal dataset D c , D M c , and M c across different clients. Specifically, the global learning objective ℓ G ( θ ) is <!-- formula-not-decoded --> where C ′ denotes the number of clients that participate in the current global round and α c represents the aggregation weight. Compared with the centralized learning solutions, FedAvg not only bridges the local hyperspectral data without sharing sensitive information, but also collaborates multihardware with a unified reconstruction model for a better performance (Fig. 1 right comparison between 3 and 4 ). However, FedAvg shows limitations in two-folds. (1) It has been shown that Figure 2: Learning process of FedHP. We take one global round as an example, which consists of (1) Initialize , (2) Local Update (Prompt) , (3) Local Update (Adaptor) , and (4) Aggregation . For each client, the reconstruction backbone ( θ p c ), is initialized as pre-trained model upon local training dataset D c and kept as frozen throughout the training. The prompt net upon hardware configuration, i.e. , coded aperture, takes effect on the input data of reconstruction, i.e. , Y M . Adaptors are introduced to enhance the learning, where ϵ c denotes the parameters of all adaptors. <!-- image --> FedAvg is hard to handle the heterogeneous data [Karimireddy et al., 2020, Khaled et al., 2020, Hsu et al., 2019]. (2) Directly training the reconstruction backbones from scratch would introduce prohibitive computation. Next, we firstly introduce the hardware-induced data heterogeneity in SCI. Then we develop a Federated Hardware-Prompt (FedHP) method to achieve cooperative learning without optimizing the client backbones. ̸ Data Heterogeneity . We firstly consider the data heterogeneity stems from the different coded apertures samples , i.e. , hardware instances. According to Section 2.1, the optical hardware samples the hyperspectral signal X i from D = { X i } i = N i =1 and encodes it into a 2D measurement Y M i , which constitutes D M and further serves as the input data for the reconstruction model. To this end, the modality of { Y M i } i =1 i = N is vulnerable to the coded aperture variation. A single coded aperture M defines a unique input data distribution for the reconstruction, i.e. , Y M i ∼ P M ( Y M i ) . For arbitrary distinct coded apertures, we have P M ∗ ( Y M ∗ i ) = P M † ( Y M † i ) if M ∗ = M † . In federated learning, data heterogeneity persistently exists since there is no identical coded aperture across different clients. Such a heterogeneous scenario, i.e. , sampling non-overlapping masks from the same mask distribution, can be caused by lightning distortion or optical platform fluttering. ̸ ̸ ̸ We take a step further to consider the other type of data heterogeneity stemming from the distinct distributions of coded apertures 2 . As formulated in (6), each client collects a coded aperture assemble following the distribution P c for c -th client. We have P c differs from one another, i.e. , P c 1 = P c 2 for c 1 = c 2 , c 1 , c 2 ∈ { 1 , ..., C } . Hardware instances from different clients are produced by distinct manufacturing agencies, so that the distribution P c 1 and P c 2 drastically differs as demonstrated in Fig. 1. This is a more challenging scenario than previous case. As presented in Section 3.2, classic federated learning methods, e.g. , FedProx [Li et al., 2020b] and SCAFFOLD [Karimireddy et al., 2020] hardly converge while the proposed method enables an obvious performance boost. ## 2.4 FedHP: Federated Hardware-Prompt Learning Hardware-Prompt Learning . Bearing the heterogeneous issue, previous efforts [Li et al., 2020b, Karimireddy et al., 2020] mainly focus on rectifying the global/local gradients upon training, which only takes effect on the learning manifold but fail to solve the heterogeneity rooted in the input data space, whose effectiveness in this low-level vision task may be limited. Since we uncover two types of the heterogeneity in snapshot compressive imaging stemming from the hardware inconsistency (Section. 2.3), this work opts to tackling the client drift issue by directly operating in the input data space. This can be achieved by collaboratively learning the input data alignment given different coded apertures. In light of the visual prompt tuning in large models [Liu et al., 2023b, Bahng et al., 2022], we devise a hardware-conditioned prompt network in the following. As shown in the Step 2 of Fig. 2, given the input data { Y M i } i = N i =1 of the reconstruction, the prompt network aligns the input samples, i.e. , measurements Y M i , by adding a prompter conditioned on the hardware configuration. Let Φ( ϕ ; M ) denote the prompt network ( e.g. , attention block) parameterized 2 We presume that the hyperspectral single dataset D c , c = 1 , ..., C , shares the same distribution by generally capturing the natural scenes. Heterogeneity stems from the hyperspectral signal is out of the scope of this work. <!-- formula-not-decoded --> In the proposed method, the prompt network collaborates different clients with inconsistent hardware configurations. It takes effect by implicitly observing and collecting diverse coded aperture samples of all clients, and jointly learns to react to different hardware settings. The prompter regularizes the input data space and achieves the goal of coping with heterogeneity sourcing from hardware. Training . As shown in Fig. 2, we demonstrate the training process of proposed FedHP by taking one global round as an example 3 . Since the prompt learning takes effect on pre-trained models, we initialize the c -th backbone parameters with the pre-trained model θ p c on local data D M c with (7). The global prompt network ϕ G is randomly initialized and distributed to the c -th client <!-- formula-not-decoded --> where ϕ c is the local prompt network, and C ′ denotes the number of clients participated in the current global round. To enable better response of the pre-trained backbone toward the aligned input data space, we also introduce the adaptors into the transformer backbone. As shown in Fig. 2 Step 3 , we show the architecture of the proposed adaptor, which is a CONV GELU CONV --structure governed by a residual connection. We insert the adaptors behind the LN layers. We perform local updates in each global round. It is composed of two stages. Firstly, we update the local prompt network ϕ c for S p iterations, and fix all the other learnable parameters . The loss is <!-- formula-not-decoded --> where we use ϵ c to represent learnable parameters of all adaptors for c -th client. Secondly, we tune the adaptors for another S b iterations. Both of the pre-trained backbone and prompt network are frozen. The loss of c -th client shares the same formulation as (11). After the local update, FedHP uploads and aggregates the learnable parameters ϕ c , c = 1 , ..., C of the prompt network. Since the proposed method does not require to optimize and communicate the reconstruction backbones, the underlying cost is drastically reduced considering the marginal model size of prompt network and adpators compared with the backbone, which potentially serves as a supplied benefit of FedHP. Compared with FedAvg, FedHP adopts the hardware prompt to explicitly align the input data representation and handle the distribution shift attributing to the coded aperture inconsistency or coded aperture distribution discrepancy. ## 3 Experiments ## 3.1 Implementation details Dataset . Following existing practices [Cai et al., 2022b, Lin et al., 2022, Hu et al., 2022, Huang et al., 2021], we adopt the benchmark training dataset of CA VE [Yasuma et al., 2010], which is composed of 32 hyperspectral images with the spatial size as 512 × 512 . Data augmentation techniques of rotation, flipping are employed, producing 205 different training scenes. For the federated learning, we equally split the training dataset according to the number of clients C . The local training dataset are kept and accessed confidentially across clients. Note that one specific coded aperture determines a unique dataset according to (2), the resulting data samples for each client can be much more than 205 /C . We employ the widely-used simulation testing dataset for the quantitative evaluation, which consists of ten 256 × 256 × 28 hyperspectral images collected from KAIST [Choi et al., 2017]. Besides, we use the real testing data with spatial size of 660 × 660 collected by a SD-CASSI system [Meng et al., 2020] for the perceptual evaluation considering the real-world perturbations. Hardware . We collect and will release the first Snapshot Spectral Heterogeneous Dataset (SSHD) containing a series of practical SCI systems, from three agencies, each of which offers a series of coded apertures that correspond to a unique distribution 4 as presented by federated settings in Fig. 2. No identical coded apertures exists among all systems. For the case of inconsistent mask distributions, 3 We provide an algorithm of FedHP in supplementary. 4 More illustrations and distribution visualizations of real collected coded apertures are in supplementary. Table 1: PSNR(dB)/SSIM performance comparison. For different clients, we sample non-overlapping masks from the same mask distribution to train the model and use unseen masks randomly sampled from all clients for testing. We report mean ± std among 100 trials for all methods. | Scene | FedAvg | FedAvg | FedProx | FedProx | SCAFFOLD | SCAFFOLD | FedGST | FedGST | FedHP (ours) | FedHP (ours) | |-------------------|----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------|-------------------| | Scene | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | | 1 2 3 4 5 6 7 8 9 | 31.98 ± 0 . 19 | 0.8938 ± 0 . 0025 0.8621 ± 0 . 0041 0.9088 ± 0 . 0019 0.9559 ± 0 . 0018 0.8821 ± 0 . 0044 0.9054 ± 0 . 0025 0.8811 ± 0 . 0027 0.8880 ± 0 . 0023 0.9012 ± 0 . 0019 | 31.85 ± 0 . 21 29.85 ± 0 . 22 30.80 ± 0 . 23 39.41 ± 0 . 22 28.14 ± 0 . 16 30.04 ± 0 . 23 29.60 ± 0 . 20 27.93 ± 0 . 20 31.29 ± 0 . 15 28.48 ± 0 . 15 | 0.8903 ± 0 . 0028 0.8516 ± 0 . 0037 0.8968 ± 0 . 0017 0.9601 ± 0 . 0013 0.8765 ± 0 . 0036 0.9054 ± 0 . 0024 0.8718 ± 0 . 0026 0.8845 ± 0 . 0018 0.8961 ± 0 . 0019 0.8671 ± 0 . 0035 | 31.78 ± 0 . 24 29.81 ± 0 . 19 30.92 ± 0 . 17 39.32 ± 0 . 20 28.08 ± 0 . 14 29.87 ± 0 . 21 29.63 ± 0 . 19 27.74 ± 0 . 31 31.22 ± 0 . 14 28.59 ± 0 . 13 | 0.8886 ± 0 . 0025 0.8473 ± 0 . 0.8961 ± 0 . 0.9565 ± 0 . 0.8742 ± 0 . 0.9011 ± 0 . 0.8708 ± 0 . 0.8802 ± 0 . 0.8929 ± 0 . 0.8626 ± 0 . | 32.02 ± 0 . 14 30.13 ± 0 . 31.19 ± 0 . 38.98 ± 0 . 28.53 ± 0 . 30.29 ± 0 . 29.89 ± 0 . 28.35 ± 0 . 30.80 ± 0 . 28.51 ± 0 . | 0.8918 ± 0 . 0018 0.8519 ± 0 . 0038 0.8975 ± 0 . 0015 0.9513 ± 0 . 0020 0.8743 ± 0 . 0041 0.8949 ± 0 . 0022 0.8786 ± 0 . 0024 0.8752 ± 0 . 0016 0.8880 ± 0 . 0021 | 32.31 ± 0 . 19 30.78 ± 0 . 19 31.62 ± 0 . 25 39.78 ± 0 . 29 28.92 ± 0 . 17 30.77 ± 0 . 22 30.44 ± 0 . 19 28.56 | 0.9026 ± 0 . 0020 | | | 30.49 ± 0 . 21 | | | | | 0031 | 20 | | | 0.8746 ± 0 . 0034 | | | 31.78 ± 0 . 23 | | | | | 0014 | 22 | | | 0.9109 ± 0 . 0018 | | | 39.39 ± 0 . 23 | | | | | 0011 | 27 | | | 0.9633 ± 0 . 0017 | | | 28.70 ± 0 . 16 | | | | | 0032 | 16 | | | 0.8935 ± 0 . 0039 | | | 30.53 ± 0 . 30 | | | | | 0019 | 21 | | | 0.9172 ± 0 . 0019 | | | 30.01 ± 0 . 20 | | | | | 0027 | 18 | | | 0.8884 ± 0 . 0024 | | | 28.60 ± 0 . 31 | | | | | 0018 | 19 | | ± 0 . 32 | 0.8957 ± 0 . 0021 | | | 31.45 ± 0 . 15 | | | | | 0014 | 12 | | 31.34 ± 0 . 13 | 0.9043 ± 0 . 0023 | | 10 | 29.04 ± 0 . 13 | 0.8751 ± 0 . 0022 | | | | 0028 | 13 | 0.8578 ± 0 . 0024 | 29.12 ± 0 . 13 | 0.8835 ± 0 . 0021 | | Avg. | 31.21 ± 0 . 10 | 0.8959 ± 0 . 0017 | 30.76 ± 0 . 10 | 0.8900 ± 0 . 0016 | 30.71 ± 0 . 09 | 0.8872 ± 0 . 0013 | 30.85 ± 0 . 11 | 0.8858 ± 0 . 0017 | 31.35 ± 0 . 10 | 0.9033 ± 0 . 0014 | Figure 3: Reconstruction results on simulation data. The density curves compare the spectral consistency of different methods to the ground truth. We use the same coded aperture for all methods. <!-- image --> we directly assign hardware systems from one source to form a client. We simulate the scenario of non-overlapping masks by distributing coded apertures from one source to different clients. Implementation details . We adopt MST-S [Cai et al., 2022a] as the reconstruction backbone. The prompt network is instantiated by a SwinIR [Liang et al., 2021] block. Limited by the computational resource, we set the number of clients as 3 in main comparison. We empirically find that collaborate such amount of clients can be problematic for popular federated learning methods under the very challenging scenario of data heterogeneity (see Section 3.2). For FL methods, we update all clients throughout the training, i.e. , C ′ = C = 3 . For the proposed method, we pre-train the client backbones from scratch for 4 × 10 4 iterations on their local data. Notably, the total training iterations of different methods are kept as 1 25 . × 10 5 for a fair comparison. The batch is set as 12 . We set the initial learning rate for both of the prompt network and adaptor as α p = α b = 1 × 10 -4 with step schedulers, i.e. , half annealing every 2 × 10 4 iterations. We train the model with an Adam [Kingma and Ba, 2014] optimizer ( β 1 = 0 9 . , β 2 = 0 999 . ). We use PyTorch [Paszke et al., 2017] on an NVIDIA A100 GPU. Compared Methods . We compare FedHP with mainstream FL methods, including FedAvg [McMahan et al., 2017], FedProx [Li et al., 2020b], and SCAFFOLD [Karimireddy et al., 2020]. Besides, GST [Wang et al., 2022] paves the way for the robustness of the reconstruction toward multiple hardware. Thereby, we integrate this method into the FL framework, dubbed as FedGST. All methods require to train and aggregate the entire client backbones. By comparison, FedHP updates and shares the prompt network, outperforming the others with smaller amount of parameters being optimized and communicated. We adopt PSNR and SSIM [Wang et al., 2004] for the quantitative evaluation. ## 3.2 Performance Simulation Results . We quantitatively compare different methods in Table 1 by considering the data heterogeneity stems from non-overlapping masks. FedHP performs better than the classic federated Table 2: PSNR(dB)/SSIM performance comparison. Masks from each client are sampled from a specific distribution for training. We randomly sample non-overlapping masks (unseen to training) from all distributions for testing. We report mean ± std among 100 trials for all methods. | Scene | FedAvg | FedAvg | FedProx | FedProx | SCAFFOLD | SCAFFOLD | FedGST | FedGST | FedHP (ours) | FedHP (ours) | |-------------------|----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------|-------------------| | Scene | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | | 1 2 3 4 5 6 7 8 9 | 29.15 ± 0 . 09 | 0.8392 ± 0 . 0065 0.8102 ± 0 . 0052 0.8464 ± 0 . 0083 0.9369 ± 0 . 0036 0.8037 ± 0 . 0069 0.8655 ± 0 . 0041 0.8042 ± 0 . 0094 0.8473 ± 0 . 0030 0.8541 ± 0 . 0074 0.8075 ± 0 . 0035 | 23.01 ± 0 . 11 20.91 ± 0 . 08 17.57 ± 0 . 11 23.08 ± 0 . 25 18.99 ± 0 . 07 19.10 ± 0 . 04 20.15 ± 0 . 09 19.89 ± 0 . 07 18.33 ± 0 . 11 20.06 ± 0 . 12 | 0.5540 ± 0 . 0069 0.4486 ± 0 . 0052 0.4621 ± 0 . 0082 0.4856 ± 0 . 0036 0.4316 ± 0 . 0082 0.4077 ± 0 . 0041 0.4903 ± 0 . 0093 0.4402 ± 0 . 0031 0.4285 ± 0 . 0071 0.3461 ± 0 . 0036 | 22.99 ± 0 . 13 20.89 ± 0 . 09 17.58 ± 0 . 12 23.00 ± 0 . 30 18.99 ± 0 . 06 19.10 ± 0 . 04 20.14 ± 0 . 09 19.89 ± 0 . 06 18.30 ± 0 . 11 20.03 ± 0 . 13 | 0.5535 ± 0 . 0066 0.4474 ± 0 . 0.4608 ± 0 . 0083 0.4848 ± 0 . 0.4301 ± 0 . 0.4063 ± 0 . 0.4883 ± 0 . 0.4395 ± 0 . 0.4269 ± 0 . 0.3451 ± 0 . | 29.46 ± 0 . 65 27.89 ± 0 . 28.45 ± 0 . 36.12 ± 0 . 26.21 ± 0 . 27.52 ± 0 . 26.88 ± 0 . 26.22 ± 0 . 27.74 ± 0 . 25.72 ± 0 . | 0.8344 ± 0 . 0067 0.7733 ± 0 . 0.8363 ± 0 . 0073 0.9181 ± 0 . 0050 0.7988 ± 0 . 0081 0.8384 ± 0 . 0048 0.7957 ± 0 . 0073 0.8206 ± 0 . 0029 0.8199 ± 0 . 0073 0.7433 ± 0 . 0046 | 30.37 ± 0 . 70 28.67 ± 0 . 38 29.81 ± 0 . 68 37.37 ± 0 . 53 27.47 ± 0 . 73 28.31 ± 0 . 45 28.29 ± 0 . 81 26.54 ± 0 . 45 29.36 ± 0 . 63 | 0.8628 ± 0 . 0084 | | | 28.28 ± 0 . 10 | | | | | 0055 | 36 | 0068 | | 0.8160 ± 0 . 0072 | | | 28.42 ± 0 . 11 | | | | | | 50 | | | 0.8771 ± 0 . 0066 | | | 36.93 ± 0 . 27 | | | | | 0038 | 50 | | | 0.9395 ± 0 . 0032 | | | 25.84 ± 0 . 07 | | | | | 0065 | 52 | | | 0.8487 ± 0 . 0011 | | | 27.28 ± 0 . 04 | | | | | 0042 | 49 | | | 0.8649 ± 0 . 0050 | | | 26.81 ± 0 . 09 | | | | | 0098 | 57 | | | 0.8298 ± 0 . 0108 | | | 25.77 ± 0 . 05 | | | | | 0039 | 44 | | | 0.8470 ± 0 . 0054 | | | 28.30 ± 0 . 09 | | | | | 0078 | 48 | | | 0.8536 ± 0 . 0054 | | 10 | 26.04 ± 0 . 12 | | | | | 0036 | 22 | | 26.78 ± 0 . 26 | 0.8111 ± 0 . 0076 | | Avg. | 28.63 ± 0 . 07 | 0.8496 ± 0 . 0041 | 20.85 ± 0 . 07 | 0.5405 ± 0 . 0059 | 20.00 ± 0 . 09 | 0.4374 ± 0 . 0040 | 28.24 ± 0 . 39 | 0.8177 ± 0 . 0045 | 28.98 ± 0 . 23 | 0.8481 ± 0 . 0054 | Figure 4: Visualization of reconstruction results on real data. Six representative wavelengths are selected. We use the same unseen coded aperture for both FedAvg and FedHP. <!-- image --> 453.3 457.6 462.1 466.8 471.6 476.5 481.6 486.9 492.4 498.0 503.9 509.9... 516.2 522.7 529.5 536.5 543.8 551.4 558.6 567.5 575.3 584.3 594.4 604.2... 614.4 625.1 636.3 648.1 learning methods. By comparison, FedProx and SCAFFOLD only allows sub-optimal performance, which uncovers the limitations of rectifying the gradient directions in this challenging task. Besides, FedGST works inferior than FedHP, since FedGST approximates the posterior and expects coded apertures strictly follows the identical distribution, which can not be guaranteed in practice. In Fig. 3, we visualize the reconstruction results with sampled wavelengths. FedHP not only enables a more granular retrieval on unseen coded aperture, but also maintains a promising spectral consistency as shown by randomly cropped patches ( e.g. , a b , in Fig. 3). Challenging Scenario of Heterogeneity . We consider a more challenging scenario where the data heterogeneity is caused the distinct coded aperture distributions of different clients . We compare different methods in Table 2. All methods experience large performance degradation, among which FedProx and SCAFFOLD becomes ineffective. Intuitively, it is hard to concur the clients under the large distribution gap, while directly adjusting the input data space better tackles the problem. Real Results . In Fig. 4, we visually compare the FedAvg with FedHP on the real data. Specifically, both methods are evaluated under an unseen hardware configuration, i.e. , coded aperture from an uncertain distribution. The proposed method introduces less distortions among different wavelengths. Such an observation endorses FedHP a great potential in collaborating hardware systems practically. ## 3.3 Model Discussion We conduct model discussion in Table 3. Specifically, we accumulate the total cost ( e.g. , number of parameters, GMACs, and training time) of all clients in a federated system. Ablation Study . We firstly consider a scenario that trains three clients independently without FL ( FedHP w/o FL ). For a fair comparison, each client pre-trains the backbone by using the same procedure as FedHP and are then enhanced with a prompt network and adaptors for efficient fine-tuning. By comparison, FedHP enables an obvious improvement ( 0 6 . dB) by implicitly sharing the hardware and data. We then investigate the effectiveness of the prompter and adaptor to the reconstruction, respectively. By observation, directly removing the adaptor leads to limited performance descent. Using prompt network brings significant performance boost. The hardware prompter aligns the input data distributions, potentially solving the heterogeneity rooted in the input data space, considering fact that learning manifold is highly correlated with the coded apertures. Discussion of the client number . In Table 4a, we discuss the power of FedHP with more real clients under the scenario of Hardware shaking . The performance gap between FedHP and FedAvg consistently remains with the client number increasing, which demonstrates the practicability of the FedHP for the cross-silo spectral system cooperative learning, e.g. , 3 ∼ 5 clients/institutions. Table 3: Ablation study and complexity analysis under the non-overlapping masks. The PSNR (dB)/SSIM are computed among 100 testing trials. We report the model complexity and the accumulative training time of all clients ( e.g. , C = 3 ). | Method | Prompter | Adaptor | FL | PSNR | SSIM | # Params (M) | GMACs | Training (days) | |--------------------|------------|-----------|------|----------------|-------------------|----------------|---------|-------------------| | FedAvg | ✗ | ✗ | ✓ | 31.21 ± 0 . 10 | 0.8959 ± 0 . 0017 | 0.12 | 2.85 | 10.62 | | FedHP w/o FL | ✓ | ✓ | ✗ | 30.75 ± 0 . 11 | 0.8890 ± 0 . 0015 | 0.27 | 12.78 | 2.86 | | FedHP w/o Adaptor | ✓ | ✗ | ✓ | 31.09 ± 0 . 10 | 0.8996 ± 0 . 0017 | 0.15 | 11.01 | 2.68 | | FedHP w/o Prompter | ✗ | ✓ | ✓ | 19.19 ± 0 . 01 | 0.2303 ± 0 . 0008 | 0.12 | 2.87 | 2.54 | | FedHP (Full model) | ✓ | ✓ | ✓ | 31.35 ± 0 . 10 | 0.9033 ± 0 . 0014 | 0.27 | 12.78 | 2.86 | Table 4: Model discussions of the proposed FedHP. (a) #Client discussion. Averaged values are reported. (b) Comparison with a deep Unfolding method. | C | FedAvg | FedAvg | FedHP | FedHP | Performance gap | Performance gap | Methods | PSNR(dB) | SSIM | #Params (M) | |-----|----------|----------|---------|---------|-------------------|-------------------|-----------|----------------|-------------------|---------------| | 4 | 31.06 | 0.8955 | 31.33 | 0.9023 | 0.27 | 0.0068 | GAP-Net | 31.07 ± 0 . 20 | 0.8895 ± 0 . 0035 | 3.83 | | 5 | 31.05 | 0.9025 | 31.32 | 0.9029 | 0.27 | 0.0004 | FedHP | 31.35 ± 0 . 10 | 0.9033 ± 0 . 0014 | 0.27 | Comparison with a deep unfolding method . We also compare the proposed FedHP with a representative deep unfolding method of GAP-Net [Meng et al., 2023] as deep unfolding methods can be adaptable to various hardware configurations. Specifically, we use three clients and keep training and testing settings of GAP-Net the same as FedHP. As shown in Table 4b, FedHP improves by 0 28 . dB with only 7% model size. In fact, despite the adaptability, deep unfolding still shows limitations in solving hardware perturbation/replacement for a given system [Wang et al., 2022]. ## 4 Related Work Hyperspectral Image Reconstruction . In hyperspectral image reconstruction (HSI), learning deep reconstruction models [Cai et al., 2022a,b, Lin et al., 2022, Huang et al., 2021, Meng et al., 2020, Hu et al., 2022, Miao et al., 2019] has been the forefront among recent efforts due to highfidelity reconstruction and high-efficiency. Among them, MST [Cai et al., 2022a] devises the first transformer backbone by computing spectral attention. Existing reconstruction learning strategies mainly considers the compatibility toward a single hardware instance. The learned model can be highly sensitive to the variation of hardware. To tackle this practical challenge, GST [Wang et al., 2022] paves the way by proposing a variational Bayesian learning treatment. Federated Learning . Federated learning [Kairouz et al., 2021, Li et al., 2020a, Wang et al., 2021] collaborates client models without sharing the privacy-sensitive assets. However, FL learning suffers from client drift across clients attributing to the data heterogeneity issue. One mainstream [Karimireddy et al., 2020, Li et al., 2020b, Xu et al., 2021, Jhunjhunwala et al., 2023, Reddi et al., 2021] mainly focus on regularizing the global/local gradients. As another direction, personalized FL methods [Collins et al., 2021, Chen and Chao, 2022, Fallah et al., 2020, T Dinh et al., 2020, Jiang and Lin, 2023] propose to fine-tune the global model for better adaptability on clients. However, customizing the global model on client data sacrifices the underlying robustness upon data distribution shift [Wu et al., 2022, Jiang and Lin, 2023], which contradicts with our goal of emphasizing the generality across hardware and thus is not considered. In this work, we propose a federated learning framework to solve the multi-hardware cooperative learning considering the data privacy and heterogeneity, which to the best knowledge, is the first attempt of empowering spectral SCI with FL. Besides, the principle underlying this method can be potentially extended to broad computational imaging applications [Zheng et al., 2021, Liu et al., 2023a, Goudreault et al., 2023, Robidoux et al., 2021] ## 5 Conclusion In this work, we observed an unexplored research scenario of multiple hardware cooperative learning in spectral SCI, considering two practical challenges of privacy constraint and the heterogeneity stemming from inconsistent hardware configurations. We developed a Federated Hardware-Prompt (FedHP) learning framework to solve the distribution shift across clients and empower the hardwaresoftware co-optimization. The proposed method serves as a first attempt to exploit the power of FL in spectral SCI. Besides, we have collected a Snapshot Spectral Heterogeneous Dataset (SSHD) from multiple real spectral SCI systems. Future works may theoretically derive the convergence of FedHP and exploit the behavior of FedHP under a large number of clients. We hope this study will inspire broad explorations in this novel direction of hardware collaboration in SCI. ## References Hyojin Bahng, Ali Jahanian, Swami Sankaranarayanan, and Phillip Isola. Visual prompting: Modifying pixel space to adapt pre-trained models. arXiv preprint arXiv:2203.17274 , 2022. 5 Yuanhao Cai, Jing Lin, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool. Mask-guided spectral-wise transformer for efficient hyperspectral image reconstruction. In CVPR , 2022a. 1, 3, 7, 9 Yuanhao Cai, Jing Lin, Haoqian Wang, Xin Yuan, Henghui Ding, Yulun Zhang, Radu Timofte, and Luc Van Gool. Degradation-aware unfolding half-shuffle transformer for spectral compressive imaging. In NeurIPS , 2022b. 1, 3, 6, 9 Hong-You Chen and Wei-Lun Chao. On bridging generic and personalized federated learning for image classification. In ICLR , 2022. 9 Inchang Choi, MH Kim, D Gutierrez, DS Jeon, and G Nam. High-quality hyperspectral reconstruction using a spectral prior. Technical report, 2017. 6 - Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. Exploiting shared representations for personalized federated learning. In ICML , 2021. 9 Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach. In NeurIPS , 2020. 9 Michael E Gehm, Renu John, David J Brady, Rebecca M Willett, and Timothy J Schulz. Singleshot compressive spectral imaging with a dual-disperser architecture. Optics express , 15(21): 14013-14027, 2007. 1 Félix Goudreault, Dominik Scheuble, Mario Bijelic, Nicolas Robidoux, and Felix Heide. Lidar-inthe-loop hyperparameter optimization. In CVPR , 2023. 2, 9 Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint arXiv:1909.06335 , 2019. 2, 5 Xiaowan Hu, Yuanhao Cai, Jing Lin, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool. Hdnet: High-resolution dual-domain learning for spectral compressive imaging. In CVPR , 2022. 1, 6, 9 Tao Huang, Weisheng Dong, Xin Yuan, Jinjian Wu, and Guangming Shi. Deep gaussian scale mixture prior for spectral compressive imaging. In CVPR , 2021. 1, 3, 6, 9 - Divyansh Jhunjhunwala, Shiqiang Wang, and Gauri Joshi. Fedexp: Speeding up federated averaging via extrapolation. In ICLR , 2023. 9 Liangze Jiang and Tao Lin. Test-time robust personalization for federated learning. In ICLR , 2023. 9 Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. Foundations and Trends® in Machine Learning , 14(1-2):1-210, 2021. 2, 9 - Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. Scaffold: Stochastic controlled averaging for federated learning. In ICML , 2020. 2, 5, 7, 9, 15 - Ahmed Khaled, Konstantin Mishchenko, and Peter Richtárik. Tighter theory for local sgd on identical and heterogeneous data. In ICAIS , 2020. 5 Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 , 2014. 7 Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. IEEE signal processing magazine , 37(3):50-60, 2020a. 2, 9 Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. In MLSys , 2020b. 2, 5, 7, 9, 15 - Yijing Li, Xiaofeng Tao, Xuefei Zhang, Junjie Liu, and Jin Xu. Privacy-preserved federated learning for autonomous driving. IEEE Transactions on Intelligent Transportation Systems , 23(7):84238434, 2021. 2 - Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In ICCV , 2021. 7 - Jing Lin, Yuanhao Cai, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool. Coarse-to-fine sparse transformer for hyperspectral image reconstruction. In ECCV , 2022. 1, 3, 6, 9 - Jiaming Liu, Rushil Anirudh, Jayaraman J Thiagarajan, Stewart He, K Aditya Mohan, Ulugbek S Kamilov, and Hyojin Kim. Dolce: A model-based probabilistic diffusion framework for limitedangle ct reconstruction. In ICCV , 2023a. 9 Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys , 55(9):1-35, 2023b. 5 - Patrick Llull, Xuejun Liao, Xin Yuan, Jianbo Yang, David Kittle, Lawrence Carin, Guillermo Sapiro, and David J Brady. Coded aperture compressive temporal imaging. Optics express , 21(9): 10526-10545, 2013. 13 - Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In AISTATS , 2017. 2, 4, 7 - Ziyi Meng, Jiawei Ma, and Xin Yuan. End-to-end low cost compressive spectral imaging with spatial-spectral self-attention. In ECCV , 2020. 1, 6, 9 - Ziyi Meng, Xin Yuan, and Shirin Jalali. Deep unfolding for snapshot compressive imaging. International Journal of Computer Vision , pages 1-26, 2023. 9 - Xin Miao, Xin Yuan, Yunchen Pu, and Vassilis Athitsos. λ -net: Reconstruct hyperspectral images from a snapshot measurement. In ICCV , 2019. 1, 9 - Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In NeurIPS 2017 Workshop on Autodiff , 2017. 7 - Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Koneˇ cn` y, Sanjiv Kumar, and H Brendan McMahan. Adaptive federated optimization. In ICLR , 2021. 9 - Nicolas Robidoux, Luis E Garcia Capel, Dong-eun Seo, Avinash Sharma, Federico Ariza, and Felix Heide. End-to-end high dynamic range camera pipeline optimization. In CVPR , 2021. 2, 9 Canh T Dinh, Nguyen Tran, and Josh Nguyen. Personalized federated learning with moreau envelopes. In NeurIPS , 2020. 9 - Idalides J Vergara-Laurens, Luis G Jaimes, and Miguel A Labrador. Privacy-preserving mechanisms for crowdsensing: Survey and research challenges. IEEE Internet of Things Journal , 4(4):855-869, 2016. 2 - Jiamian Wang, Yulun Zhang, Xin Yuan, Ziyi Meng, and Zhiqiang Tao. Modeling mask uncertainty in hyperspectral image reconstruction. In ECCV , 2022. 2, 3, 4, 7, 9 Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H Brendan McMahan, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, et al. A field guide to federated optimization. arXiv preprint arXiv:2107.06917 , 2021. 2, 9 Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing , 13(4):600-612, 2004. 7 Shanshan Wu, Tian Li, Zachary Charles, Yu Xiao, Ziyu Liu, Zheng Xu, and Virginia Smith. Motley: Benchmarking heterogeneity and personalization in federated learning. In NeurIPS , 2022. 9 Jing Xu, Sen Wang, Liwei Wang, and Andrew Chi-Chih Yao. Fedcm: Federated learning with client-level momentum. arXiv preprint arXiv:2106.10874 , 2021. 9 Fumihito Yasuma, Tomoo Mitsunaga, Daisuke Iso, and Shree K Nayar. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum. IEEE transactions on image processing , 19(9):2241-2253, 2010. 6 Xin Yuan, David J Brady, and Aggelos K Katsaggelos. Snapshot compressive imaging: Theory, algorithms, and applications. IEEE Signal Processing Magazine , 38(2):65-88, 2021. 1 Yucheng Zheng, Yi Hua, Aswin C Sankaranarayanan, and M Salman Asif. A simple framework for 3d lensless imaging with programmable masks. In ICCV , 2021. 2, 9 ## A Appendix / supplemental material We provide more discussions and results of the proposed FedHP as follows - · Limitations discussion. (Section A.1). - · Broader impacts on the proposed FedHP. (Section A.2). - · More discussions on new hardware (Section A.3). - · Detailed algorithm of FedHP (Section A.4). - · More visualizations and analysis (Section A.5). - · More discussions on data privacy protection (Section A.6). - · More statistical analysis (Section A.7). ## A.1 Limitations One of the limitations of the proposed method is the lack of the real hardares due to the privacy concern. Thus it is hard for us to perform the federated learning on a large number of the clients as in other tasks like the classification, e.g. , C &gt; 100 . This in return, motivate us to solve the practical concerns of this field. We are working on collecting more real data and will continue exploring the power of the proposed method. ## A.2 Broader Impacts This work develops a federated learning treatment to enable the collaboration of the CASSI systems with different hardware configurations. The proposed method will practically encourage the crossinstitution collaborations with emerging optical system designs engaged. By improving the robustness of the pre-trained reconstruction software backend toward optical encoders, this work will help expedite the efficient and widespread deployment of the deep models on sensors or platforms. Table 5: Performance comparison between FedAvg and FedHP on CACTI ( e.g. , C = 3 ). | Methods | PSNR (dB) | SSIM | |-----------|----------------|-------------------| | FedAvg | 27.35 ± 1 . 22 | 0.9174 ± 0 . 0046 | | FedHP | 27.87 ± 0 . 89 | 0.9192 ± 0 . 0047 | ## A.3 New Hardware Our key technical contribution is to provide a new multi-hardware optimization framework adapting to hardware shift by only accessing local data. The principle underlying the proposed FedHP can be potentially extended to broad SCI applications. This work serves as a proof of concept to inspire future endeavors in a more general scope. Besides experimental results on CASSI, we also perform additional experiments by applying FedHP to another prevalent SCI system of Coded Aperture Compressive Temporal Imaging (CACTI) [Llull et al., 2013]. The results in Table 5 present a performance boost of FedHP over FedAvg baseline (under the same setting as the manuscript), demonstrating that the proposed FedHP does not particularly pertain to CASSI. ## A.4 Algorithm The learning procedure of proposed FedHP is provided in Algorithm 1. Let us take one global round for example, the learning can be divided into four stages. (1) Initializing the global prompt network from scratch and then distributing it to local clients. Then instantiating the client backbones with the pre-trained models upon the local training dataset. The adaptors are also randomly initialized for a better adaptation of the pre-trained backbones to the aligned input data representation. (2) Local updating of the prompt network, during which all the other learnable parameters in the system are kept fixed. (3) Local updating of the adaptors. Notably, the parameters of the adaptors is only updated and maintained in local. (4) Global aggregation of the local prompt networks. ## Algorithm 1 FedHP Training Algorithm Input: Number of global rounds T ; Number of clients C ; Number of client subset C ′ ; Pre-trained models θ p c , c = 1 , ..., C ; Number of local update iterations S p , S b ; Random initialized parameter of prompt network ϕ G ; Random initialized parameter of adaptors of c -th client ϵ c ; Learning rate α p of prompt network; Learning rate α b of adaptors; Output: ϕ G , ϵ c , c = 1 , ..., C ; 1: Server Executes; 2: Randomly choose a set of clients of number C ′ ; 3: for t = 1 , ..., T do 4: for c ∈ C ′ in parallel do 5: Send global prompt network ϕ G to ϕ c ; 6: ϕ c ← LocalTraining( θ p c , ϵ c , ϕ c ); 7: end for 8: ϕ G ← ∑ c = C ′ c =1 |D | c |D| ϕ c ; 9: end for 10: return ϕ G ; 11: LocalTraining( θ p c , ϵ c , ϕ c ); 12: for s = 1 , ..., S p do 13: ϕ c ← ϕ c -α p ∇ ℓ θ ( p c , ϵ c , ϕ c ) using ℓ c = 1 N ∑ N i =1 || f ( θ p c , ϵ c ; Y M c i +Φ( M c )) -X i || 2 2 ; 14: end for 15: for s = 1 , ..., S b do 16: ϵ c ← ϵ c -α b ∇ ℓ θ ( p c , ϵ c , ϕ c ) using ℓ c = 1 N ∑ N i =1 || f ( θ p c , ϵ c ; Y M c i +Φ( M c )) -X i || 2 2 ; 17: end for 18: return ϕ c to server; ## A.5 Visualizations In this section, we provide more visualization results of different methods. In Figs. 5 ∼ 6, we present the reconstruction results of different methods under the scenario of hardware shaking , i.e. , the data heterogeneity is naively induced from the different CASSI instances across clients. FedHP enables more fine-grained details retrieval. Besides, we compare the spectral density curves on selected representative spatial regions. The higher correlation to the reference, the better spetrum consistency with the ground truth. In Figs. 7 ∼ 9, we show additional real reconstruction results of FedAvg and FedHP on selected wavelengths. By comparison, FedAvg fails to reconstruct some content, while the proposed FedHP allows a more granular result. Figure 5: Reconstruction results on simulation data. The density curves compares the spectral consistency of different methods to the ground truth. We use the same coded aperture for all methods. <!-- image --> Figure 6: Reconstruction results on simulation data. The density curves compares the spectral consistency of different methods to the ground truth. We use the same coded aperture for all methods. <!-- image --> Figure 7: Visualization of reconstruction results on real data. Seven (out of 28 ) representative wavelengths are selected. We use the same unseen coded aperture for both FedAvg and FedHP. <!-- image --> In Figs. 8, we visualize the different distributions of coded apertures in distinct clients under the scenario of the distribution shift of coded apertures among different clients leads to the data heterogeneity among different local input dataset. This mimics a very challenging scenario where in different clients ( e.g. , research institutions), the corresponding CASSI systems source from different manufacturers. The proposed FedHP allows a potential collaboration among different institutions for the hyperspectral data acquisition for the first time despite the large distribution gap. By comparison, classic methods of FedProx [Li et al., 2020b] or SCAFFOLD [Karimireddy et al., 2020] fail to provide reasonable retrieval results. ## A.6 Data Privacy Protection FedHP inherently addresses privacy from different perspectives. (1) Hardware decentralization: In the FedHP framework, real hardware configurations ( e.g. , real masks) remain confidential to the local clients. This design makes it difficult to reverse-engineer the pattern or values of the real mask without direct sharing. (2) Raw data decentralization: FedHP maintains a private hyperspectral dataset for each client. The hyperspectral images are processed locally ( e.g. , encoding or data augmentation) and never leaves the client, thereby minimizing the risk of exposure. (3) Training process decentralization: FedHP only collects the local updates from the prompt network, which are then shared with the central server. The local updates are anonymized and aggregated without accessing underlying data, preventing any tracing back to the data source and thus protecting confidentiality. In Table 3, we quantitatively compared the performance of the proposed 'FedHP' and 'FedHP w/o FL' under privacy-constrained environments. FedHP demonstrates a dB average improvement, showcasing its robust model performance and offering a significant privacy advantage that aligns with regulations restricting data sharing. Figure 8: Coded aperture distributions across Clients 1 ∼ 3 under the scenario of manufacturing discrepancy . The symmetrical logarithm scale is employed for better visualization. <!-- image --> Figure 9: Visualization of reconstruction results on real data. Seven (out of 28 ) representative wavelengths are selected. We use the same unseen coded aperture for both FedAvg and FedHP. <!-- image --> ## A.7 Statistical Analysis We further conducted a statistical analysis using a paired t-test to compare the PSNR and SSIM values from FedHP and FedAvg. We define the hypotheses as follows: (1) Null hypothesis ( H 0 ): there is no significant difference in the PSNR and SSIM values between FedAvg and proposed FedHP. (2) Alternative hypothesis ( H a ): there is a significant difference in the PSNR and SSIM values between FedAvg and proposed FedHP. We calculated the differences based on the averaged PSNR and SSIM values for each scene from both FedAvg and FedHP, resulting in ten differences values for PSNR ( d PSNR ) and SSIM ( d SSIM ). We performed the paired t-test using t = ¯ d s d / √ n , where ¯ d denotes the mean of the difference values for either PSNR or SSIM, s d is the standard deviations, and n is the number of the paired observations. We calculated the p-value upon the t-distribution for a two-tailed test using the formula p-value = 2 × P T &gt; t ( | | ) , where P T &gt; t ( | | ) denotes the probability that a t-distributed random variable with n -1 degrees of freedom exceeds the absolute value of the observed t-statistic. For PSNR, we observe t = 2 50 . and p-value is 0 034 . . Since the p-value is less than the typical significance level of 0 05 . . Therefore, we reject the null hypothesis ( H 0 ) and conclude that there is a statistically significant difference between the PSNR values of FedAvg and FedHP. For SSIM, we observe t = 7 39 . and p-value is 0 00004 . . The p-value of is significantly less than 0 05 . , indicating a very strong statistically significant difference between the SSIM values of FedAvg and FedHP. The test results in PSNR and SSIM confirms that the performance gap between FedHP and FedAvg is statistically significant. ## NeurIPS Paper Checklist ## 1. Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: [Yes] Justification: The main claims made in the abstract and introduction accurately reflect the paper's contributions and scope. ## Guidelines: - · The answer NA means that the abstract and introduction do not include the claims made in the paper. - · The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. - · The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. - · It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. ## 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: We discussed the limitations of the work performed by the authors in the supplementary. ## Guidelines: - · The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. - · The authors are encouraged to create a separate "Limitations" section in their paper. - · The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. - · The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. - · The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. - · The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. - · If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. - · While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. ## 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [NA] Justification: The paper does not include theoretical results. ## Guidelines: - · The answer NA means that the paper does not include theoretical results. - · All the theorems, formulas, and proofs in the paper should be numbered and crossreferenced. - · All assumptions should be clearly stated or referenced in the statement of any theorems. - · The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. - · Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. - · Theorems and Lemmas that the proof relies upon should be properly referenced. ## 4. Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: We disclose all the information needed to reproduce the main experimental results of the paper in the supplementary. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. - · If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. - · Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. - · While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example - (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. - (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. - (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). - (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. ## 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: The manuscript and the supplementary provides detailed information in reproduce the results. We claim to release the dataset, code, and pretrained models in the abstract. ## Guidelines: - · The answer NA means that paper does not include experiments requiring code. - · Please see the NeurIPS code and data submission guidelines ( https://nips.cc/ public/guides/CodeSubmissionPolicy ) for more details. - · While we encourage the release of code and data, we understand that this might not be possible, so 'No' is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). - · The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines ( https: //nips.cc/public/guides/CodeSubmissionPolicy ) for more details. - · The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. - · The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. - · At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). - · Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. ## 6. Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: The manuscript and the supplementary provides detailed information about the experimental setting/details. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. - · The full details can be provided either with the code, in appendix, or as supplemental material. ## 7. Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [Yes] Justification: The results are accompanied by variances for the experiments that support the main claims of the paper. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. - · The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). - · The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) - · The assumptions made should be given (e.g., Normally distributed errors). - · It should be clear whether the error bar is the standard deviation or the standard error of the mean. - · It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. - · For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). - · If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. ## 8. Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [Yes] Justification: The computer resources has been reported. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. - · The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. - · The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). ## 9. Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ? Answer: [Yes] Justification: The research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics. ## Guidelines: - · The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. - · If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. - · The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). ## 10. Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [Yes] Justification: The paper discuss both potential positive societal impacts and negative societal impacts of the work performed. ## Guidelines: - · The answer NA means that there is no societal impact of the work performed. - · If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. - · Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. - · The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. - · The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. - · If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). ## 11. Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [NA] Justification: The paper poses no such risks. ## Guidelines: - · The answer NA means that the paper poses no such risks. - · Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. - · Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. - · We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. ## 12. Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: [Yes] Justification: The creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected. ## Guidelines: - · The answer NA means that the paper does not use existing assets. - · The authors should cite the original paper that produced the code package or dataset. - · The authors should state which version of the asset is used and, if possible, include a URL. - · The name of the license (e.g., CC-BY 4.0) should be included for each asset. - · For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. - · If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. - · For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. - · If this information is not available online, the authors are encouraged to reach out to the asset's creators. ## 13. New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [Yes] Justification: The paper will release a new dataset of SSHD. We provide rich details about SSHD in the manuscript and the supplementary. ## Guidelines: - · The answer NA means that the paper does not release new assets. - · Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. - · The paper should discuss whether and how consent was obtained from people whose asset is used. - · At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. ## 14. Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [NA] Justification: The paper does not involve crowdsourcing nor research with human subjects. ## Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. - · According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. ## 15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [NA] Justification: The paper does not involve crowdsourcing nor research with human subjects. ## Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. - · We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. - · For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
zw2K6LfFI9
PERIA: Perceive, Reason, Imagine, Act via Holistic Language and Vision Planning for Manipulation
Long-horizon manipulation tasks with general instructions often implicitly encapsulate multiple sub-tasks, posing significant challenges in instruction following. While language planning is a common approach to decompose general instructions into stepwise sub-instructions, text-only guidance may lack expressiveness and lead to potential ambiguity. Considering that humans often imagine and visualize sub-instructions reasoning out before acting, the imagined subgoal images can provide more intuitive guidance and enhance the reliability of decomposition. Inspired by this, we propose **PERIA**(**PE**rceive, **R**eason, **I**magine, **A**ct), a novel framework that integrates holistic language planning and vision planning for long-horizon manipulation tasks with complex instructions, leveraging both logical and intuitive aspects of task decomposition. Specifically, we first perform a lightweight multimodal alignment on the encoding side to empower the MLLM to perceive visual details and language instructions. The MLLM is then jointly instruction-tuned with a pretrained image-editing model to unlock capabilities of simultaneous reasoning of language instructions and generation of imagined subgoals. Furthermore, we introduce a consistency alignment loss to encourage coherent subgoal images and align with their corresponding instructions, mitigating potential hallucinations and semantic conflicts between the two planning manners. Comprehensive evaluations across three task domains demonstrate that PERIA, benefiting from holistic language and vision planning, significantly outperforms competitive baselines in both instruction following accuracy and task success rate on complex manipulation tasks.
https://openreview.net/pdf/2a39fcbdd8617cd0a7fbe9312a20b9b51ea8ab74.pdf
[ { "confidence": 4, "rating": 6, "review_id": "hd3aedGTvC", "review_text": "The paper proposes a framework that integrates large multimodal language models (MLLMs) and diffusion models to enable holistic language planning and vision planning for long-horizon robotic manipulation tasks with complex instructions. The authors jointly train the MLLM and diffusion model for language reasoning and visual imagination through latent image token generation. An explicit consistency loss aligns the reasoned instructions with the imagined subgoal images.\n\n1. Novel motivation for integrating of multiple modalities for providing better guidance.\n\n2. Principled design of the framework components like the encoding-side alignment and the latent image token generation approach.\n\n1. Weak experimental evaluation (see below questions).\n\n1. While the authors acknowledge that training and inference costs are significant, the current draft lacks a more in-depth analysis of these problems. In particular, what are the various tradeoffs associated with different MLLMs that can be used, taking into consideration training time/FLOPs/MACs? How does varying these choices impact performance? Experiments answering these questions are equally as important as the ablations being run on training design choices (e.g. alignment loss).\n\n2. Lack of real-world evaluation. Many works ([1], [2]) in this problem setting leveraging foundation models for robotic manipulation demonstrate the advantages of these large MLLMs/generative models in real-world settings, where the distribution of objects is extremely long-tailed. Can the authors show that PERIA can operate with similar success in this regime?\n\n[1] [Look Before You Leap: Unveiling the Power of GPT-4V in Robotic Vision-Language Planning](https://arxiv.org/abs/2311.17842)\n\n[2] [Zero-Shot Robotic Manipulation with Pretrained Image-Editing Diffusion Models](https://arxiv.org/abs/2310.10639)" }, { "confidence": 4, "rating": 6, "review_id": "yF6zORO4R5", "review_text": "The paper tackles the problem of long-horizon task planning on pick-and-place tasks in the Ravens domain. Given a dataset of trajectories, it first learns the projection to align the vision and language encoder for a multimodal LLM. Then it finetunes both the multimodal LLM and a diffusion model to generate a step action in language, where the diffusion model is used to generate a conditioning subgoal image, which is proposed as an intermediate step that helps with the step action generation in language.\n\n- The paper is overall well-written and the figures are helpful for understanding the method.\n\n- It is unclear, at least from the experiments in the paper, that the diffusion model is actually useful, especially when the output is still in language space. For example, it seems that the tasks studied in the paper can be easily tackled by a modern multimodal language model (likely even the open-sourced ones), by simply providing the the initial image and appropriate prompting. However, this is missing as an important baseline in the paper (and this does not require additional training data). Furthermore, to demonstrate the effectiveness of an image subgoal in addition to a language subgoal, the evaluation would have to be done on tasks that have subgoals that are difficult to describe in language but easy to describe in visual space, but all the evaluated tasks are the contrary.\n- A related work “Video Language Planning” also seems to be missing from the paper, despite it might involve closed-sourced models. However, the idea seems quite relevant and it’s unclear if the paper provides additional insights for the community.\n\nSee \"weaknesses\" section above." }, { "confidence": 4, "rating": 7, "review_id": "rVdP3LcARR", "review_text": "The paper proposes a holistic vision-language planning method for long-horizon robot manipulation, by learning a multi-modal large language model (MLLM). The MLLM generates interleaved language actions and keyframe images based on language goal and the initial image. Each pair of generated language and keyframe image is used as conditioning of a learned motion policy for robot manipulation.\n\nBased on a pretrained MLLM model, the paper first learns a projector to align visual encoding to with language on image captioning tasks tailored to robot manipulation. Then it applies instruction tuning to fine-tune the MLLM, an output projector, and a diffusion model to generate interleaved language and images. Additional, the authors propose another training objective to align the generated language and images. All large models are fine-tuned with LoRA.\n\nOn simulated robot manipulatio benchmarks, the proposed method outperforms imitation learning, language planning, and vision planning methods. The paper also systematically evaluates capabilities of the MLLM along different axes, and justifies the benefits introduced by each loss design via ablation studies.\n\n- The paper tackles the important challenge of robot long-horizon planning. The proposed method plans jointly in the language and image space, providing rich information for the low-level policy to condition on.\n- The paper exploits the capabilities of MLLM to generate language and images for robot manipulation, used with a separate low-level policy. I think this is good practice as MLLM is not naturally suitable to generate robot motion.\n- The experiments are comprehensive and provide useful information on understanding the capability of the trained MLLM.\n- The paper is in general well-written and easy to follow.\n\n- The explanation of low-level policy is missing from the main paper. This part is very important - the MLLM outputs language and images only, and it's not clear how these modalities are bridged with robot motion.\n- The contribution of the alignment loss between generated image and language is not sufficiently justified in the experiment. It will be helpful if the authors can provide the task success rate when the loss is absent.\n\n- I wonder which of the three pretraining tasks is the most important for vision-language alignment in the context of robot manipulation. It will be interesting if the authors can show some ablation studies on this." }, { "confidence": 5, "rating": 6, "review_id": "KqvZedn6p7", "review_text": "This paper focuses on robotic manipulation with complex instructions. It proposes PERIA, a framework that integrates MLLM and diffusion models to incorporate both language planning and visual planning for long-horizon language-instructed manipulation tasks. Specifically, PERIA first performs a lightweight multi-modal alignment to consolidate the multi-modal perception capabilities. Then, PERIA performs multi-modal instruction tuning, where it outputs both subgoal language descriptions and visual tokens, both of which are fed to a diffusion model to generate subgoal images. PERIA introduces an additional consistency loss between and generated subgoal image and language descriptions. Experimental results demonstrate that PERIA significantly outperforms competitive baselines.\n\n•\tThis work follows a natural and reasonable pipeline to tackle the manipulation tasks with complex language instructions. Combining language planning and visual generation for manipulation is a sound approach.\n\n•\tThe alignment stage empowers the overall capabilities, as demonstrated in the experimental part.\n\n•\tPERIA achieves convincing experimental results compared with previous works. The authors also conduct extensive ablative study to mine more insights.\n\n•\tEnd-to-end learning for such a large system requires considerable cost. Such a comprehensive framework may lead to powerful performances but the resources may be a limitation. This paper does not present how much resources PERIA uses or related experiments to address such potential concerns.\n\n•\tOne of my concerns is that the consistency objective, which forces the MLLM to output subgoal language descriptions, may suffer from accumulative error. This is because when the generated subgoal image is not the desired image but is a natural image that can be reached within one-step action, the MLLM would learn an incorrect subgoal description.\n\n•\tMore literature references and related baselines should be incorporated.\n\n•\tThe ablation in visual planning lacks an experiment where PERIA generates subgoal images with either subgoal descriptions or generated visual tokens, which should reveal more insights into what leads to the improvements in visual planning.\n\n•\tYou generate subgoal images with subgoal descriptions and generate visual tokens. Why not use 1) subgoal descriptions and observation or 2) generated visual tokens alone? The former resembles a world model, and the latter sounds like a decoding of an imagined visual subgoal, both of which sound more natural. I guess you have tried the latter but found it was not as good as adding subgoal language.\n\n•\tWhat LLM do you use? It is possible that a powerful LLM accounts for superior performance to some extent. Have you compared the LLMs of different works?" } ]
## PERIA: Perceive, Reason, Imagine, Act via Holistic Language and Vision Planning for Manipulation <!-- image --> Fei Ni 1 Jianye Hao 1 2 , ∗ Shiguang Wu 2 Longxin Kou 1 Jinyi Liu 1 Mingzhi Li 1 Yuzheng Zhuang 2 1 College of Intelligence and Computing, Tianjin University Yifu Yuan 1 Zibin Dong 1 Yan Zheng 1 ∗ 2 Huawei Noah's Ark Lab ## Abstract Long-horizon manipulation tasks with general instructions often implicitly encapsulate multiple sub-tasks, posing significant challenges in instruction following. While language planning is a common approach to decompose general instructions into stepwise sub-instructions, text-only guidance may lack expressiveness and lead to potential ambiguity. Considering that humans often imagine and visualize sub-instructions reasoning out before acting, the imagined subgoal images can provide more intuitive guidance and enhance the reliability of decomposition. Inspired by this, we propose PERIA PErceive, Reason, Imagine, Act ( ), a novel framework that integrates holistic language planning and vision planning for long-horizon manipulation tasks with complex instructions, leveraging both logical and intuitive aspects of task decomposition. Specifically, we first perform a lightweight multimodal alignment on the encoding side to empower the MLLM to perceive visual details and language instructions. The MLLM is then jointly instruction-tuned with a pretrained image-editing model to unlock capabilities of simultaneous reasoning of language instructions and generation of imagined subgoals. Furthermore, we introduce a consistency alignment loss to encourage coherent subgoal images and align with their corresponding instructions, mitigating potential hallucinations and semantic conflicts between the two planning manners. Comprehensive evaluations across three task domains demonstrate that PERIA, benefiting from holistic language and vision planning, significantly outperforms competitive baselines in both instruction following accuracy and task success rate on complex manipulation tasks. The details and visualizations are available at the homepage. ## 1 Introduction Recent advances in vision-language models (VLMs), such as BLIP [1] and LIV [2], enable openvocabulary visual recognition and multi-modal alignment, showing promise in robotic manipulation tasks specified human-provided language instructions [3, 4, 5, 6]. For semantically clear and concise instructions, such as "pick the red block on the green one", robotic agents can easily understand and complete the task in a single step using action primitives. However, when instructions become more general and complex, such as "stack the blocks as a pyramid and each layer in one color", the manipulation task can span long horizons and implicitly encapsulate multiple sub-tasks separated by action primitives, posing a major obstacle in instruction following. Current approaches often resort to decomposing complex instructions into manageable subtasks, either through language planning or vision planning based on the decomposed modality. Language planning, the more common approach, decomposes into progressive stepwise sub-instructions, which can be either predefined skill libraries ∗ Corresponding authors: [email protected], [email protected] Figure 1: Overview of PERIA (Perceive, Reason, Imagine, Act), inspired by the human cognitive process of following complex instructions, which involves perceiving environment and tasks, reasoning the required language plans, and imagining the intermediate subgoal images before acting. <!-- image --> in natural language [3, 7] or latent codebooks [8]. On the other hand, vision planning, a more recent development, decomposes complex instructions into coherent subgoal images as keyframes [9, 10], serving as visual milestones to provide more intuitive and expressive guidance for action execution. Language planning focuses on "how to act" and the sub-instructions outline the necessary procedural action process of the task completion, emphasizing the temporal dependencies and causal relationships between decomposed stepwise sub-instructions. On the other hand, vision planning concentrates on "what to act towards" and intuitive and grounded subgoal images with rich spatial and contextual information can enable robot agents to more easily understand what intermediate landmarks and visual anchors should achieve towards task completion. From a cognitive perspective, humans rely on a symbiotic operation of the brain's hemispheres [11], with the left primarily associated with logical language-based reasoning , and the right is linked to intuitive visual-based imagining . For humans, language planning and vision planning are often intertwined and performed simultaneously, involving either imagining the desired intermediate goals and then reasoning about the required plans to achieve them, or first reasoning out necessary stepwise plans and then imagining corresponding resulting images. Inspired by this, a natural question arises: Can we develop a framework that emulates this cognitive synergy by simultaneously performing language planning and vision planning for robotic manipulation tasks involving complex instructions just like humans? For this, we propose PERIA PErceive, Reason, Imagine, Act ( ), a novel framework that integrates multi-modal large language model (MLLM) and diffusion model to enable language-based reasoning and visual-based imagining respectively, leveraging holistic language planning and vision planning for long-horizon manipulation tasks with general complex instructions. Specifically, we first train the MLLM's perception ability by fine-tuning the encoder side's projection layer to align the text and vision modalities in the LLM's hidden layers in a lightweight manner, avoid the potential hallucinations and enhance the grounding ability. Next, we perform instruction tuning to simultaneously equip PERIA reasoning and imagination capabilities by explicitly adding additional image tokens after the reasoning phase and extracting rich latent image representations from MLLM to guide the generation of corresponding subgoal images. Moreover, We also introduce an alignment loss between reasoned sub-instructions and imagined subgoal images to enhance the consistency and accuracy of vision and language planning, jointly updated with generation and reasoning losses. In this way, vision planning provides a visualization of language planning, offering more intuitive guidance to avoid potential confusion. Language planning, in turn, provides reliable logical guidance at the semantic level for vision planning, preventing semantic conflicts in the generation of coherent image chains. The comprehensive evaluation across three typical long-horizon manipulation tasks demonstrates that PERIA enjoy the accuracy of instruction following and synergistic combination significantly improves decomposition accuracy and task success rate compared to existing methods that rely solely on either language or vision planning alone. The contributions of this work are as follows: - · We propose PERIA, a novel framework that integrates holistic language planning and vision planning, leveraging the logical and intuitive decomposition of general complex instructions. - · We encourage MLLM to output rich latent visual tokens to guide the diffusion model to generate images and further explicitly align between language instructions and visual subgoals, simultaneously developing the MLLM's reasoning and the diffusion model's imagination capabilities. - · PERIA demonstrates significant improvements in instruction following and task success rate on complex manipulation tasks compared to existing methods that rely on either language or vision planning alone, establishing a promising and inspiring paradigm for long-horizon manipulation. ## 2 Related Work ## 2.1 Hierarchical Planning for Long-horizon Manipulation Embodied manipulation tasks with general instructions often span multiple subtasks and long horizons, making direct end-to-end action prediction challenging due to compounding errors without intermediate guidance [12, 13, 14, 15]. Recent works adopt hierarchical planning, decomposing complex instructions into sequential sub-tasks to execute. Language planning like LISA [8] and Xskill [16] decompose the general instruction based on the latent skill codebook discovered during training. SayCan [3] and EmbodiedGPT [7] both leverage LLM to enable reasoning into sequential interpretable instructions in natural languages. Vision planning, a more recent development, decomposes complex instructions into sequential subgoal images. CoTDiffusion [10] utilizes diffusion models to translate multi-modal prompts into coherent subgoal images in a chain-of-thought manner, serving as visual milestones that are challenging to describe using language alone. While existing works rely solely on either language or vision planning, our PERIA framework enables simultaneous language and vision planning, harnessing the strengths of both approaches to provide a comprehensive, multi-modal guide that enhances the accuracy of instruction decomposition and following. ## 2.2 LLMfor Robotics Manipulation With the tremendous success of LLMs, there has been a surge in research exploring their capabilities for robotics manipulation, such as SayCan [3], Inner Monologue [17]. PAR [18] leverages a vision language model(VLM) as a captioner for visual observations and the generated captions are fed into LLMfor language planning. ViLA [19] and CoPA [20] follow a similar pipeline but replace LLM and VLMwith more advanced GPT4V [21] with stronger visual reasoning capabilities. EmbodiedGPT [7] employs a pre-trained open-sourced LLaMA model [22] as the language model for instruction tuning on collected robotics data, enhancing reasoning and planning capability specifically for embodied scenarios. PERIA introduces image generation as an additional supervision signal to encourage the MLLM to perceive more detailed visual details, reducing hallucinations and errors in reasoning. The generated images in vision planning also provide a more intuitive guide that further enhances the accuracy of instruction decomposition and improves instruction following performance. ## 2.3 Image Generation for Robotics Manipulation Inspired by recent development of recent text-to-image models [23, 24, 25], many works have begun to explore the visualization of manipulation tasks to guide robot action execution. LfVoid [26] enables the editing of original observations to obtain goal images based on natural language instructions to provide reward signals. SuSIE [9] similarly leverages an image-editing diffusion model to act as a high-level planner by proposing intermediate subgoals that a low-level controller can accomplish. LfVoid and SuSIE are limited to single-step sub-instructions, while CoTDiffusion [10] supports various instruction modalities and generates coherent subgoal image chains using a semantic alignment module. These works demonstrate that subgoal images can provide more detailed and intuitive guidance than language-only instructions. However, they do not incorporate LLMs for reasoning and are prone to failure and semantic conflicts without logical guidance. Our PERIA framework leverages the prior knowledge in MLLMs to assist in generating promising sequential images, enhancing consistency with complex task instructions and improving instruction following. ## 3 Method By leveraging MLLM and diffusion-based image editing models, PERIA enables holistic language planning and vision planning for stepwise language instructions and visual subgoal images, serving as language milestones and visual anchors to guide action execution in long-horizon tasks. We first introduce the lightweight alignment of language and vision modalities on the encoding side of Figure 2: Overview of PERIA . PERIA first learns to align the vision and language on encoding side of MLLM for perceiving. Then PERIA performs instruction tuning to MLLM jointly with diffusion model in an end-to-end manner to unlock the reasoning and generation ability for holistic language planning and vision planning. and show the module is trainable and frozen, respectively. <!-- image --> <!-- image --> the MLLM to achieve precise Perceive ability in Section 3.1. We then illustrate how to perform instruction tuning on the MLLM to enable Reason for language planning in Section 3.2 and how to jointly train with a diffusion model to Imagine coherent subgoal images aligned with corresponding instructions in Section 3.3. Moreover, we leverage an explicit alignment between instructions and images to achieve a synergistic effect between language and vision in Section 3.4. Since our focus is not on the low-level policy, please refer to Appendix E for the implementation details of Act . ## 3.1 Perceive: Encoding-side LLM-centric Multimodal Alignment To enable embodied robot agents to effectively perceive and comprehend visual scenes, a straightforward approach is to use an off-the-shelf Vision-Language Model (VLM) as an image captioner. However, the information bottleneck between the LLM and VLM limited by language modality, results in missing visual details, which is particular problematic in robotics manipulation tasks that require precise visual understanding. To address this limitation, we leverage image captioning as a training task for LLM-centric multimodal alignment to encourage visual representations compatible with the text feature space within the LLM, extending it to a MLLM to allow for a more precise and detailed comprehension of visual scenes. Specifically, we utilize the privileged information available in simulation environments to create a large-scale dataset of pairwise ground-truth data through three types of captioning tasks. First, the single-frame scene description scenario focuses on understanding a single observation frame, where the MLLM is tasked with providing a brief description covering aspects such as object recognition, size identification, number counting, color understanding, and spatial relationships. Second, action recognition between consecutive keyframes scenario presents the MLLM with two consecutive keyframes, requiring it to understand the visual difference between them and recognize the executed action, enhancing the perception of spatial relationships and action dynamics at the instruction level. Moreover, short demonstration understanding scenario involves processing a given short demonstration of frame sequences, strengthening the MLLM's temporal relationship understanding and grounding ability. These carefully designed captioning tasks with the high-quality training data, enable the MLLM to develop a strong foundation in visual perception and understanding for embodied manipulation tasks. Specifically, initialized from a pre-trained LLM, the MLLM contains a visual encoder V ( e.g., CLIPL [27]) to extract the visual features f , and an projection layer W to project f into the language modality. We follow the training of LLaVA [28] with cross-entropy loss (CELoss) as: <!-- formula-not-decoded --> where C can be the image caption for features alignment and l is number of word tokens. n is the numbers of the images I fed in MLLM, which can be differ in different captioning tasks. To perform the lightweight alignment, we freeze the weights of both the vision encoder V and LLM, and only update the parameters of W that encourage to map the image features into a shared latent space that is compatible with the MLLM's hidden representations. The alignment of visual and language modalities on the encoding side can effectively alleviates hallucinations and lays the foundational perception abilities for generating more grounded language and vision planning. For more detailed categorical analysis of improvement benefiting from captioning task, please refer to Appendix F.1. ## 3.2 Reason: Instruction Tuning for Language Planning With the initial coarse alignment of visual and language on the encoding side, we proceed to instruction tuning to encourage the MLLM to learn how to decompose complex instructions for language planning. These general task instructions T can be categorized into two types based on the modalities involved: 1) text-only instructions, such as "sort blocks into bowls according to the matching colors" which can be directly processed by the language encoder; and 2) multi-modal instructions that consist of interleaved language and images of a single object or whole observation, as suggested by VIMA-BENCH [13], which are more expressive and challenging to understand. For instance, consider an interleaved multi-modal prompt such as "Stack objects &lt;img&gt; in this order &lt;img&gt; &lt;img&gt;", where &lt;img&gt; serves as a placeholder for the corresponding images, which can be images of blocks or other objects in the observation. With the benefit of encoding-side alignment via several captioning scenarios across frames and videos, MLLM equipped with the input projection layer can handle multi-modal instructions including interleaved text, images, and even video frames. Then we design an instruction prompt P template such as: "Given the current observation &lt;img&gt; and the general task instruction [ T ], can you provide a brief and concise sub-instruction about how to act next?". We collect the stepwise language instructions E as the groundtruth response for the language planning task specified with the observation o , the prompt P and the general instruction T . The instruction tuning loss for language planning is defined as follows: <!-- formula-not-decoded --> where e are the word token of stepwise instruction, and v are the possible n images from multi-modal instruction. The text instruction with the &lt;img&gt; token and the corresponding image are processed by the aligned language encoder and image encoder respectively and then are unified fed into LLM for reasoning. The MLLM follows the standard auto-regressive training for the next token prediction and then can be regarded as a visual assistant for various tasks such as visual question answering. To perform instruction tuning, we fine-tune the MLLM using the LoRA technique [29] while keeping the encoding side frozen, including the visual encoder and its projection layer. Additionally, we employ two kinds of prompts to require MLLM to generate the next sub-instruction for the single step or all the stepwise instructions in order respectively. Two modes can be randomly switched during the instruction tuning and can effectively encourage MLLM to perform single-step and multi-step sequential language planning for closed-looped and open-looped control respectively. ## 3.3 Imagine: Decoding-side Synergistic Training for Vision Planning Considering the phrase a picture is worth a thousand words , subgoal images could provide higher expressive capabilities for conveying subtasks compared to sub-instructions with complex language only. Inspired by this, we integrated pre-trained conditional diffusion models to convert decomposed sub-instructions into coherent visual subgoal plans. While a natural approach would be to directly use the text instructions or captions as prompts for the image editing model, shown in Figure 3, relying solely on decoded text instructions as conditions may lead to an information bottleneck. The expressiveness of the instructions can be limited, and information loss may occur, as it is confined to the language modality. Inspired by [30, 31], to bridge the gap between the language and vision modalities, we introduce N special [IMG] tokens in the vocabulary codebook of the MLLM. These special tokens have trainable word embeddings and should be predicted after the generated language instructions jointly during the reasoning phase, shown in Figure 2. These appended visual tokens Figure 3: Three pipelines of MLLM for generation images. PERIA ( ) leverage visual tokens extracted from the MLLM during language planning serve as more expressive guidance for subgoal imagination compared to captions ( ) or decomposed instructions ( ) in language only. <!-- image --> [IMG] are treated as latent imagination of subgoal image from the MLLM and we employ an output image projection module R to transform them into actual visual guidance U for diffusion model: <!-- formula-not-decoded --> where w is the word embedding of language instructions and h is the hidden state from the last layer of MLLM before image projection layer of [IMG] , conditioned on learnable query embeddings q = { q 1 , ..., q L } , where L is the token numbers setting from the pre-trained diffusion model. The transformation over w can be seen as a general representation from language modality, while h represents a more grounded visual imagination that aligns with the language planning within the MLLM's reasoning. To simultaneously fine-tune the diffusion model and the MLLM, we employ the generation loss between the generated image and the groundtruth image. Our image editing model is based on latent diffusion, which learns the noise latent z t at the denoising timestamp t to reconstruct the groundtruth goal image. The generation loss is to learn the UNet ϵ θ that predicts the added noise based on the input image v and the visual imagination guidance U from the MLLM, formulated as: <!-- formula-not-decoded --> ## 3.4 Enhancing Consistency between Vision and Language Planning To further enhance the consistency between vision and language planning, we introduce an additional alignment objective between generated language instructions and visual images, as illustrated in Figure 2. Specifically, we feed both the generated image v t +1 and the current observation o t at planning step t into the MLLM and prompt it with understanding the differences between the two frames, which is exactly the action recognition captioning task in the perceive phase of PERIA Section 3.1. The response output ˜ E t generated by the MLLM is compared with the groundtruth stepwise language instruction E t for consistency, and can be formulated as the alignment consistency loss: <!-- formula-not-decoded --> <!-- formula-not-decoded --> The additional alignment task reinforces the synergy between vision and language planning, ensuring that generated subgoal images and text instructions are consistent and mutually informative, alleviating the compounding errors that may arise in long-horizon tasks due to inconsistencies. Vision planning provides a visualization of language planning, offering more intuitive guidance and reducing potential confusion or ambiguity. Conversely, language planning provides logical guidance at the semantic level for vision planning, preventing semantic conflicts during the generation of coherent image chains. This synergistic approach leverages the complementary strengths of vision and language, enabling PERIA to produce plans that are both visually grounded and semantically meaningful. Figure 4: The illustrating examples of holistic language and vision planning for general instructions, with stepwise sub-instructions and coherent subgoal images enhancing the instruction following. <!-- image --> ## 4 Experiments ## 4.1 Experiment Setup Benchmark &amp; Tasks To provide comprehensive evaluations, we conduct experiments across three typical long-horizon manipulation environments. More benchmark and task details are in Appendix A. - · LoHoRavens [18]: is a Ravens-based benchmark consisting of 11 long-horizon languageconditioned tasks categorized into Stacks , Sort , and Matching . Original tasks all involve manipulating Bowls&amp;Blocks and we additionally develop a more complex Letters scenario including 9 tasks of Shape , Orders , and Spell to further diversify the instruction and increase task difficulties. - · VIMA-BENCH [13], a benchmark for long-horizon manipulation, contains diverse tasks guided by multi-modal prompts. We choose 8 tasks from three representative categories Rearrange , Constraints , and Follows , specified by interleaved language and images of object or ultimate goal. Baselines To more clearly and comprehensively evaluate the effectiveness of different approaches, we categorize the baselines into three types based on their specific planning methods as follows: - · End-to-end : We choose CLIPort [12], one of the most widely used end-to-end languageconditioned imitation learning framework in Ravens-like manipulation benchmarks. CLIPort directly take the high-level language instructions as input to predict the action without a planner. - · Language Planning : We select several representative language planning methods that decompose general high-level instructions into stepwise instructions. LISA [8] trains a skill predictor to combine the discovered implicit skill codebooks for complex instructions. PAR [18] (PlannerActor-Reporter) replaces the latent skill planner with an LLM, using the VLM as a reporter for visual observations. The instruction and the generated captions are then fed into the LLM for language planning. EmbodiedGPT [7] follows a similar pipeline but replaces LLM and VLM with more advanced MLLM with stronger visual reasoning capabilities after instrution tuning. - · Vision Planning : SuSIE [9] incorporating a pretrained image-editing models to generate goal images for action prediction but only support simple single-step instructions. CoTDiffuison [10] leverage a semantic alignment module within the diffusion model to enable the sequential subgoal image generation for complex general instructions. For more details, please refer to Appendix C. ## 4.2 Main Quantitative Results of Success Rate We begin by comparing the performance of PERIA and baselines in solving long-horizon tasks across three typical task domains. The baselines can be categorized into three types of planners: e2e planner , language planner , and visual planner . As shown in Table 1, PERIA significantly outperforms other baselines in terms of success rate. As expected, the end-to-end learning method performs the worst due to the lack of intermediate guidance, making it difficult for the policy to follow general instructions for long-horizon tasks. In contrast, the visual planner paradigm, which explicitly Table 1: The evaluation of success rate between baselines and we report the mean and variance across 5 seeds. | | Blocks&Bowls | Blocks&Bowls | Blocks&Bowls | Letters | Letters | Letters | VIMA-BENCH | VIMA-BENCH | VIMA-BENCH | |----------------------|----------------------------------------------|----------------------------------------------|----------------------------------------------|----------------------------------------------|----------------------------------------------|----------------------------------------------|---------------------------------------------|---------------------------------------------|----------------------------------------------| | Model | Stacking | Sort | Matching | Shape | Orders | Spell | Rearrange | Follow | Constraint | | CLIPort | 18 . 4 ± 3 . 2 | 19 . 2 ± 4 . 6 | 17 . 8 ± 2 . 9 | 9 . 8 ± 1 . 4 | 8 . 1 ± 2 . 7 | 2 . 3 ± 0 . 8 | 5 . 8 ± 1 . 9 | 2 . 4 ± 0 . 6 | 8 . 3 ± 2 . 1 | | LISA PAR EmbodiedGPT | 26 . 6 ± 4 . 8 34 . 7 ± 5 . 5 48 . 6 ± 6 . 7 | 22 . 1 ± 3 . 5 32 . 8 ± 6 . 3 49 . 1 ± 5 . 9 | 23 . 0 ± 5 . 1 31 . 1 ± 4 . 4 43 . 4 ± 7 . 8 | 18 . 4 ± 2 . 6 31 . 5 ± 5 . 8 40 . 9 ± 6 . 4 | 16 . 1 ± 3 . 9 30 . 7 ± 4 . 9 48 . 2 ± 7 . 5 | 10 . 2 ± 1 . 7 27 . 3 ± 7 . 2 52 . 7 ± 6 . 2 | 8 . 9 ± 2 . 3 24 . 4 ± 6 . 1 38 . 3 ± 5 . 3 | 6 . 3 ± 1 . 5 16 . 1 ± 3 . 7 37 . 2 ± 4 . 7 | 11 . 9 ± 4 . 2 26 . 5 ± 4 . 6 43 . 5 ± 6 . 9 | | SuSIE CoTDiffusion | 34 . 1 ± 3 . 8 47 . 9 ± 6 . 0 | 32 . 6 ± 4 . 1 44 . 3 ± 7 . 6 | 33 . 2 ± 5 . 7 56 . 6 ± 5 . 2 | 37 . 8 ± 6 . 6 46 . 1 ± 6 . 5 | 35 . 2 ± 4 . 3 53 . 9 ± 4 . 8 | 34 . 1 ± 7 . 4 44 . 8 ± 7 . 9 | 37 . 9 ± 6 . 8 51 . 2 ± 6 . 3 | 40 . 2 ± 5 . 4 54 . 5 ± 7 . 3 | 51 . 2 ± 7 . 1 76 . 1 ± 5 . 6 | | PERIA (ours) | 63 . 9 ± 5 . 8 | 65 . 0 ± 6 . 4 | 72 . 3 ± 7 . 1 | 60 . 6 ± 5 . 2 | 65 . 2 ± 6 . 7 | 71 . 1 ± 7 . 5 | 74 . 8 ± 6 . 0 | 67 . 2 ± 7 . 8 | 89 . 3 ± 4 . 9 | decomposes tasks into stepwise instructions and employs a hierarchical framework consisting of a language planner and a language-conditioned policy, shows more promise and demonstrates a clear advantage over the end-to-end approach. Within the visual planner category, PAR and EmbodiedGPT both leverage the common sense knowledge from LLM and significantly outperform LISA, which uses a skill predictor for latent skill codebook rather than LLM. Furthermore, although both PAR and EmbodiedGPT are based on LLaMA, EmbodiedGPT employs a visual projector to expand the LLM to an MLLM for more precise perception and reasoning capabilities, while PAR applies a captioner to convert visual images into the language modality for reasoning, which may impact the accuracy of reasoning and task success rate to some extent. The visual planner paradigm, which generates intermediate keyframes, offers more intuitive guidance compared to language planning, and its advantage is more evident in VIMA-BENCH, where sub-tasks are challenging to describe sufficiently using language-only instructions. CoTDiffusion supports generating coherent subgoal images for complex instructions, resulting in performance gains compared to SuSIE. But CoTDiffusion does not explicitly reason about the instructions, which can lead to semantic inconsistencies in the generated subgoal images, causing it to still underperform compared to our algorithm. In contrast, our PERIA algorithm introduces an MLLM for explicit reasoning and generation, providing more sufficient and reliable intermediate guidance for instruction following in long-horizon tasks. ## 4.3 Further Analysis Accuracy of Language Planning We compare the accuracy of language planning with two evaluation metrics: the token accuracy which directly calculates the token-level matching rate between decomposed stepwise instructions and the groundtruth instructions, and the semantic similarity by calculating the embeddings distance of two instructions from pre-trained text encoder like CLIP. Our focus here is on generative language planning using LLMs and exclude LISA from this comparison. Table 2: Evaluation of reasoning accuracy between methods on two metrics. | Method | Token ↑ | Semantic ↑ | |-------------------------|-----------|--------------| | PAR | 58.2 | 0.63 | | EmbodiedGPT | 65.9 | 0.68 | | PERIA (ours) | 97 . 6 | 0 . 98 | | - w/o perceive pretrain | 80.2 | 0.83 | | - w/o vision planning | 83.7 | 0.79 | As illustrated in Table 2, PERIA demonstrates the highest accuracy in both token-level and semanticlevel comparisons. Although PAR introduces LLMs for language planning, it relies on an isolated, out-of-the-shell VLM as a captioner to convert visual observations into language descriptions, which may cause details missing during hard captioning. EmbodiedGPT further introduces a projection layer to bridge the gap between vision and language in the latent space, gaining more advantages in perception which is critical in language planning. Compared to EmbodiedGPT, PERIA's superior performance can be attributed to the explicit incorporation of vision planning. By jointly fine-tuning MLLM using the additional image generation loss, the supervision from visual aspects encourages promoting attention to visual details and spatial information for more grounded reasoning. When we remove the joint training of vision planning, we observe the more frequent hallucinations and errors in language planning, such as generating unseen objects with wrong colors, sizes, or locations, which significantly decreases the accuracy of language planning. Moreover, we also ablate the encoding-side multimodal alignment and the degradation in accuracy highlights the importance of enhancing the foundational perception capabilities through our carefully designed dataset, which includes various perception-related data such as spatial relationships, temporal relationships, size recognition, and color identification. To further investigate the improvement in foundational perception abilities, we conduct a detailed categorical analysis, which can be found in Appendix F.1. <!-- image --> <!-- image --> Figure 5: More detailed quantitative analysis. (a) The ablation studies on consistency loss and [IMG] token numbers. (b) The comparisons of three planning paradigms in tasks with various horizon lengths. (c) The evaluation of generalization ability of three levels. See text for further discussion. <!-- image --> Fidelity of Vision Planning We further compare the fidelity of generated goal images against groundtruth keyframes using the Fréchet Inception Distance (FID) [32] as the evaluation metric. Although SuSIE is not a strict vision planning method for long-horizon manipulation due to its limitation to simple single-step instructions, we grant it a relaxed privilege by providing oracle stepwise instructions to enable a comparison. Table 3: Comparisons of FID ( ↓ ) between methods on three task domains. | Methodology | Blocks | Letters | VIMA | |-----------------|----------|-----------|--------| | SuSIE (+oracle) | 18.9 | 18.1 | 19.4 | | CoTDiffusion | 13.1 | 15.8 | 17.6 | | PERIA (ours) | 10 . 2 | 13 . 5 | 11 . 4 | | - w/o alignment | 12.3 | 14.2 | 15.9 | However, as shown in Table 3, PERIA still demonstrates superiority, primarily due to the implicit generation of latent image tokens during language planning. The extracted image latent embeddings from MLLM retain more details and provide more sufficient guidance beyond language for subgoal image generation. CoTDiffusion supports general instruction inputs and can sequentially generate multiple images. However, the absence of explicit language planning in CoTDiffusion makes it challenging to ensure the semantic coherence of the generated images, potentially leading to dilemmas such as semantic repetition, jumping, or regression within the generated image sequences. In contrast, PERIA incorporates MLLM for reliable instruction decomposition and leverages the extracted image latent embeddings to achieve superior fidelity in vision planning compared to existing methods. Moreover, the performance drop in the ablation study without consistency loss highlights the importance of alignment between reasoned stepwise instructions and generated subgoal images, attributed to the synergistic combination of language planning and vision planning in our framework. Consistency between Reasoning and Imagining We leverage CLIP [27] to measure the imagelanguage similarity between generated instructions and images, with results presented in Figure 5a. The additional consistency alignment loss explicitly constraints and encourages semantic alignment between the imagined images from vision planning and the reasoned stepwise instructions from language planning, significantly enhancing the collaboration and consistency between the two modalities. Furthermore, increasing the number of [IMG] tokens provides more expressive and sufficient guidance, facilitating the MLLM in producing semantically coherent language and image tokens. However, the benefit of adding more tokens becomes marginal beyond a certain threshold. Effectiveness of Holistic Planning We modify the low-level policy model into several variants, including ones that simultaneously utilize stepwise instructions and subgoal images, as well as those that rely on each modality individually. As shown in Figure 5b, the holistic planning approach achieves the highest success rate than single planning, with the benefit of the increased amount of information available and rich multi-modal guidance for decision-making, which reduces the training difficulty of low-level policy and enhance the accuracy of action prediction. Moreover, the advantage of holistic planning becomes more evident as the horizon length increases, demonstrating its scalability and effectiveness in handling complex, long-horizon manipulation tasks. Generalization across Tasks We evaluate the generalization ability in three levels with increasing difficulty: placement generalization with novel placement of objects (L1), object generalization with novel objects (L2), and combinatorial generalization with extra novel instructions (L3). The results in Figure 5c demonstrate that PERIA enjoys a substantial advantage over other baselines, highlighting the importance of the common knowledge prior within the MLLM and diffusion model and holistic planning, which enhance the generalization and robustness for unseen challenging tasks. ## 5 Conclusion We propose PERIA SEe, Reason, Imagine, Act ( ), a novel framework that integrates MLLM and diffusion-based image editing models to enable holistic language and vision planning for long-horizon manipulation tasks with complex instructions. We first perform a lightweight multi-modal alignment to enhance the MLLM's fundamental perception capabilities of visual details for manipulation, alleviating potential hallucinations. Then, we encourage MLLM to output rich latent visual tokens to guide diffusion model in generating images and explicitly align language instructions with visual subgoals to simultaneously unlock MLLM's reasoning and diffusion model's imagination capabilities. Extensive evaluations across three challenging benchmarks demonstrate that PERIA significantly outperforms competitive baselines in both instruction following accuracy and task success rate, while also enjoying better generalization ability across tasks. We believe PERIA highlights the potential of holistic language and vision planning and we hope this novel paradigm can provide some insights to robotics manipulation research of long-horizon tasks with complex instructions in free-form, towards more open embodied scenarios. One current bottleneck is the relatively high time cost of training and inference. Improving the joint training efficiency of MLLMs and diffusion models in a lightweight manner and accelerating image generation sampling are interesting directions for future work. ## 6 Acknowledgements This work is supported by the National Natural Science Foundation of China (Grant Nos. 62422605, 92370132, 62106172), the National Key R&amp;D Program of China (Grant No. 2022ZD0116402) and the Xiaomi Young Talents Program of Xiaomi Foundation. ## References - [1] Dongxu Li, Junnan Li, and Steven CH Hoi. Blip-diffusion: Pre-trained subject representation for controllable text-to-image generation and editing. arXiv preprint arXiv:2305.14720 , 2023. - [2] Yecheng Jason Ma, William Liang, Vaidehi Som, Vikash Kumar, Amy Zhang, Osbert Bastani, and Dinesh Jayaraman. Liv: Language-image representations and rewards for robotic control. arXiv preprint arXiv:2306.00958 , 2023. - [3] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691 , 2022. - [4] Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378 , 2023. - [5] Brianna Zitkovich, Tianhe Yu, Sichun Xu, Peng Xu, Ted Xiao, Fei Xia, Jialin Wu, Paul Wohlhart, Stefan Welker, Ayzaan Wahid, et al. Rt-2: Vision-language-action models transfer web knowledge to robotic control. In 7th Annual Conference on Robot Learning , 2023. - [6] Wenlong Huang, Chen Wang, Ruohan Zhang, Yunzhu Li, Jiajun Wu, and Li Fei-Fei. Voxposer: Composable 3d value maps for robotic manipulation with language models. arXiv preprint arXiv:2307.05973 , 2023. - [7] Yao Mu, Qinglong Zhang, Mengkang Hu, Wenhai Wang, Mingyu Ding, Jun Jin, Bin Wang, Jifeng Dai, Yu Qiao, and Ping Luo. Embodiedgpt: Vision-language pre-training via embodied chain of thought. Advances in Neural Information Processing Systems , 36, 2023. - [8] Divyansh Garg, Skanda Vaidyanath, Kuno Kim, Jiaming Song, and Stefano Ermon. Lisa: Learning interpretable skill abstractions from language. Advances in Neural Information Processing Systems , 35:21711-21724, 2022. - [9] Kevin Black, Mitsuhiko Nakamoto, Pranav Atreya, Homer Rich Walke, Chelsea Finn, Aviral Kumar, and Sergey Levine. Zero-shot robotic manipulation with pre-trained image-editing diffusion models. In The Twelfth International Conference on Learning Representations , 2023. - [10] Fei Ni, Jianye Hao, Shiguang Wu, Longxin Kou, Liu Jiashun, Yan Zheng, Bin Wang, and Yuzheng Zhuang. Generate subgoal images before act: Unlocking the chain-of-thought reasoning in diffusion model for robot manipulation with multimodal prompts. Computer Vision and Pattern Recognition , 2024. | [11] | Michael C Corballis. Left brain, right brain: facts and fantasies. PLoS biology , 12(1):e1001767, 2014. | |--------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [12] | Mohit Shridhar, Lucas Manuelli, and Dieter Fox. Cliport: What and where pathways for robotic manipula- tion. In Conference on robot learning , pages 894-906. PMLR, 2022. | | [13] | Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, and Linxi Fan. Vima: General robot manipulation with multimodal prompts. In NeurIPS 2022 Foundation Models for Decision Making Workshop , 2022. | | [14] | Suraj Nair, Eric Mitchell, Kevin Chen, Silvio Savarese, Chelsea Finn, et al. Learning language-conditioned robot behavior from offline data and crowd-sourced annotation. In Conference on Robot Learning , pages 1303-1315. PMLR, 2022. | | [15] | Longxin Kou, Fei Ni, Yan Zheng, Jinyi Liu, Yifu Yuan, Zibin Dong, and HAO Jianye. Kisa: A unified keyframe identifier and skill annotator for long-horizon robotics demonstrations. In Forty-first International Conference on Machine Learning . | | [16] | Mengda Xu, Zhenjia Xu, Cheng Chi, Manuela Veloso, and Shuran Song. Xskill: Cross embodiment skill discovery. In Conference on Robot Learning , pages 3536-3555. PMLR, 2023. | | [17] | Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608 , 2022. | | [18] | Shengqiang Zhang, Philipp Wicke, Lütfi Kerem ¸enel, S Luis Figueredo, Abdeldjallil Naceri, Sami Haddadin, Barbara Plank, and Hinrich Schütze. Lohoravens: A long-horizon language-conditioned benchmark for robotic tabletop manipulation. arXiv preprint arXiv:2310.12020 , 2023. | | [19] | Yingdong Hu, Fanqi Lin, Tong Zhang, Li Yi, and Yang Gao. Look before you leap: Unveiling the power of gpt-4v in robotic vision-language planning. arXiv preprint arXiv:2311.17842 , 2023. | | [20] | Haoxu Huang, Fanqi Lin, Yingdong Hu, Shengjie Wang, and Yang Gao. Copa: General robotic manip- ulation through spatial constraints of parts with foundation models. arXiv preprint arXiv:2403.08248 , 2024. | | [21] | OpenAI. Gpt-4v(ision) system card. https://cdn.openai.com/papers/GPTV_System_ Card.pdf , 2023. | | [22] | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 , 2023. | | [23] | Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125 , 2022. | | [24] | Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to- image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487 , 2022. | | [25] | Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, and Jun Zhu. One transformer fits all distributions in multi-modal diffusion at scale. arXiv preprint arXiv:2303.06555 , 2023. | | [26] | Jialu Gao, Kaizhe Hu, Guowei Xu, and Huazhe Xu. Can pre-trained text-to-image models generate visual goals for reinforcement learning? Advances in Neural Information Processing Systems , 36, 2023. | | [27] | Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning Transferable Visual Models From Natural Language Supervision. In International Conference on Machine Learning (ICML) , 2021. | | [28] | Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual Instruction Tuning. In arXiv:2304.08485 , 2023. | | [29] | Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685 , | | [30] | Tsu-Jui Fu, Wenze Hu, Xianzhi Du, William Yang Wang, Yinfei Yang, and Zhe Gan. Guiding instruction- based image editing via multimodal large language models. arXiv preprint arXiv:2309.17102 , 2023. | |--------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [31] | Jing Yu Koh, Daniel Fried, and Russ R Salakhutdinov. Generating images with multimodal language models. Advances in Neural Information Processing Systems , 36, 2023. | | [32] | Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems , 30, 2017. | | [33] | Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning , pages 8748-8763. PMLR, 2021. | | [34] | Andy Zeng, Pete Florence, Jonathan Tompson, Stefan Welker, Jonathan Chien, Maria Attarian, Travis Armstrong, Ivan Krasin, Dan Duong, Vikas Sindhwani, et al. Transporter networks: Rearranging the visual world for robotic manipulation. In Conference on Robot Learning , pages 726-747. PMLR, 2021. | | [35] | Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al. Openflamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390 , 2023. | | [36] | Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition pages 18392-18402, 2023. | | [37] | Oier Mees, Lukas Hermann, Erick Rosete-Beas, and Wolfram Burgard. Calvin: Abenchmark for language- conditioned policy learning for long-horizon robot manipulation tasks. IEEE Robotics and Automation Letters (RA-L) , 7(3):7327-7334, 2022. | | [38] | Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 , 2017. | | [39] | Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502 , 2020. | | [40] | Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 902023. | | [41] | Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the CVPR , pages 10674-10685, 2022. | ## A Details of Benchmarks and Tasks To provide comprehensive evaluations, we conduct experiments across three typical long-horizon manipulation environments covering diverse instruction types and task types. ## A.1 Bowls&amp;Blocks LoHoRavens [18] is a benchmark dataset built upon the Ravens robot simulator, comprising ten long-horizon, language-conditioned tasks. The tasks are categorized into three types: Stacks , Sort , and Matching . In Stacks tasks, the objective is to place blocks in absolute or relative areas. Sort tasks require sorting blocks or bowls with similar specified attributes together. Matching tasks involve placing corresponding blocks into matching bowls. These tasks encompass various aspects of long-horizon reasoning, including color, size, space, arithmetic, and reference. To successfully complete each task, the robot must effectively combine multiple reasoning capabilities and develop an appropriate long-horizon plan. Move Move the blocks with some specified attributes like colors, sizes or locations to the specified area. We design 4 tasks as follows: - · MoveBlocktoArea: Move all the blocks to the {abs\_area}. - · MoveColorBlocktoArea: Move all the {color} blocks to the {abs\_area}. - · MoveBlockinAreatoArea: Move all blocks in {abs\_area} to {abs\_area}. - · MoveSizeBlocktoCorner: Move all {size} blocks to {position} corner. Move all blocks in right bottom area to the left top area. <!-- image --> Figure 6: The example in MoveBlockinAreatoArea in Blocks&amp;Bowls Move . Stack Stack blocks or bowls with specified attributes and put to some area together. We design 4 tasks as follows: - · StackAllBlocks: Stack all the blocks together. - · StackBlocksOfSameSize: Stack all the blocks of the same size. - · StackBlocksOfSameColor: Stack all the blocks of the same color. - · StackColorBlockstoArea: Stack all blocks of primary color on left side. Stack ALL Blocks of the same color together. <!-- image --> Figure 7: The example in StackBlocksOfSameColor in Blocks&amp;Bowls Stack . Matching Placing corresponding blocks into matching bowls or zones. We design 3 tasks as follows: - · PutBlockInMatchingBowl: Stack all the blocks together. - · PutBlockInMismatchingBowl: Stack all the blocks of the same size. - · PutBlockinZonewithMatchingColor: Put blocks of the same color in the zone with matching color. Put the blocks in the bowls with matched colors <!-- image --> Figure 8: The example in PutBlockInMatchingBowl in Blocks&amp;Bowls Matching . ## A.2 Letters In addition to the original LoHoRavens simulator, which consists of long-horizon tasks involving only Bowls&amp;Blocks , we have developed a novel Letter scenario to further diversify the range of long-horizon reasoning tasks. This scenario randomly generates various letters with different colors and cases. We have designed three new task types: Shapes , which select the letters with specified shapes and arrange it it together; Orders , which involves arranging letters in a specific order to test the robot's understanding of sequence and position; and Spell , which assesses the robot's capacity for letter combination and word spelling by requiring the robot to spell words that meet specific requirements. Shape Move the blocks with some specified attributes like colors, sizes or locations to the specified area. We design 3 tasks as follows: - · SortVerticalSymmBlockstoArea: Sort the vertically symmetrical letters to the bottom side. - · SortHorizontalSymmBlockstoArea: Sort the horizontal symmetrical letters to the blank space . - · SortCentralSymmBlockstoArea: Sort the central symmetrical letters to the corner. Sort the vertically symmetrical letters to the bottom side. <!-- image --> Figure 9: The example in SortVerticalSymmBlockstoArea in Letters Shape . Orders Stack blocks or bowls with specified attributes and put to some area together. We design 3 tasks as follows: - · PutLettersAlphabeticalOrder: Put the letters on the tables alphabetical order. - in - · PutLettersRevAlphabeticalOrder: Put the letters on the tables in reverse alphabetical order. - · SortConsLettersOrder: Sort the consonants from all letters in orders. <!-- image --> Sort all letters in alphabetical order. Figure 10: The example in PutLettersAlphabeticalOrder in Letters Orders . Spell Placing corresponding blocks into matching bowls or zones. We design 3 tasks as follows: - · SpellLongWords: Spell words that are as long as possible. - · SpellCSConfName: Spell out the name of a top CS conference. - · SpellTransName: Spell out the name of a common transportation. Spell a word about the transportation. <!-- image --> Figure 11: The example in SpellTransName in Letters Spell . ## A.3 VIMA-BENCH VIMA-BENCH [13]: a benchmark for long-horizon manipulation with general instruction specified by multi-modal prompts, containing various tasks ranging from simple object manipulation to multiobject manipulation. We select three kinds of representative long-horizon manipulation tasks Rearrange , Constraints , and Follows , in which general instructions are specified by interleaved language and images of object or ultimate goal. Rearrange Move the blocks with some specified attributes like colors, sizes or locations to the specified area. We design 2 tasks as follows: - · RearrangeObjtoGoal: Rearrange objects to this setup &lt;img&gt;. - · RearrangeObjtoGoalthenRestore: Rearrange objects to this setup &lt;img&gt; then restore. Figure 12: The example in RearrangeObjtoGoalthenRestore in VIMA-BENCH Rearrange . <!-- image --> Constraints Stack blocks or bowls with specified attributes and put to some area together. We design 4 tasks as follows: - · SweepNoExceedCons: Sweep all &lt;obj&gt; into &lt;container&gt; without exceeding &lt;constraint&gt;. - · SweepNoTouchCons: Sweep all &lt;obj&gt; into &lt;container&gt; without touching &lt;constraint&gt;. - · PutSameTextfromGoal: Put all objects with same texture as &lt;IMG&gt; into it . - · PutSameShapefromGoal: Put all objects with same shape as &lt;IMG&gt; into it. Figure 13: The example in SweepNoTouchCons in VIMA-BENCH Constraints . <!-- image --> Follow Placing corresponding objects following orders specified by several relevant images. We design 2 tasks as follows: - · FollowMotionObj: Follow this motion for &lt;obj&gt;: &lt;img 1&gt; ... &lt;img N&gt;. - · StackObjFollow: Stack objects in this order: &lt;img 1&gt; ... &lt;img N&gt;. Figure 14: The example in StackObjFollow in VIMA-BENCH Follow . <!-- image --> ## B Details of Datasets ## B.1 The Collection of Expert Demonstrations For the LoHoRavens and VIMA-Bench datasets, we utilize the provided oracle engines to collect expert demonstrations. It is worth noting that if there are multiple correct answers or multiple orders to complete the task, we only focus on whether the instruction-specified complex task is completed in the end and include all correct demonstrations as training data. Across all designed 28 tasks, we collect 2k demonstrations per task, gathering a total of 56k long-horizon demonstrations with horizons ranging from 2 to 10+ sub-tasks. For all the collected data, we divide it into an 80% proportion for the training dataset D train and 20% for the testing dataset. Among all the tasks, the Spell task in the Letters dataset requires additional explanation. Unlike other tasks where the instructions are relatively fixed, the Spell task's instructions are more flexible and diverse. For example, if the instruction asks to spell the name of a top computer science conference, we can solve this by maintaining a list of all top CS conferences in advance and checking all possible permutations of the given letters to find the answer, which is then executed by the oracle engine to render the expert data. However, for more diverse instructions, such as spelling the name of a food or city, the list may be extremely large, requiring the introduction of LLMs like GPT to assist in finding the answer for the oracle engine, which is a good direction to expand the Letters domain and enrich the task types, as future work. ## B.2 The Privileged Information Annotation of Datasets During the initialization process, we maintain a record of the assets used and annotate their corresponding attributes, enabling accurate identification of the color, size, and spatial relationships of the manipulated objects during each subtask's pick-and-place operation without the need for additional manual annotations or reliance on VLMs for captioning, which often have low accuracy without fine-tuning, even for models like GPT-4V. This privileged information from the simulator's underlying data allows us to construct a series of captioning tasks that help improve PERIA's foundational capabilities in visual perception and reasoning. During testing, access to the underlying environment information, such as the exact number of ground truth blocks or letters and their various attributes, is not possible. The model must directly perceive and ground these crucial visual details from the visual observations and perform subsequent reasoning, significantly increasing the task's difficulty. Table 4: Overview of three main task types, including Block&amp;Bowls Letters , and VIMA-BENCH . | Task Type | Description | Horizon | Color | Size | Spatial | Instruction | |---------------|-----------------------------------------------------------------|-----------|---------|--------|-----------|---------------| | Blocks &Bowls | Blocks &Bowls | | | | | | | | Move all the blocks to the [ABS POS] area | 4 ∼ 15 | | | | Text-only | | | Move all blocks of a color to the red zone | 2 ∼ 15 | | | | Text-only | | Move | Move all the blocks in the [ABS POS] area to the [ABS POS] area | 2 ∼ 15 | | | | Text-only | | | Move all the blocks on the corner/side | 4 ∼ 15 | | | | Text-only | | | Stack all the blocks | 4 ∼ 15 | | | | Text-only | | | Stack blocks of the same size. | 4 ∼ 15 | | | | Text-only | | Stack | Stack blocks in alternate colors. | 2 ∼ 15 | | | | Text-only | | | Stack only the primary color blocks on the left side. | 2 ∼ 12 | | | | Text-only | | | Put the blocks in the bowls with matching colors | 2 ∼ 12 | | | | Text-only | | Matching | Put the blocks in the bowls with mismatching colors | 2 ∼ 12 | | | | Text-only | | | Put blocks of the same color in the zone with matching color | 2 ∼ 12 | | | | Text-only | | Letters | Letters | | | | | | | | Sort the vertically symmetrical letters to the bottom side | 2 ∼ 15 | | | | Text-only | | Shape | Sort the horizontal symmetrical letters to the blank space | 2 ∼ 15 | | | | Text-only | | | Sort the central symmetrical letters to the corner | 2 ∼ 15 | | | | Text-only | | | Put the letters on the tables in alphabetical order | 2 ∼ 15 | | | | Text-only | | Orders | Put the letters on the tables in reverse alphabetical order | 2 ∼ 15 | | | | Text-only | | | Sort the consonants from all letters in orders | 2 ∼ 15 | | | | Text-only | | | Spell words that are as long as possible | 4 ∼ 15 | | | | Text-only | | Spell | Spell out the name of a top CS conference | 4 ∼ 10 | | | | Text-only | | | Spell out the name of a common transportation | 4 ∼ 15 | | | | Text-only | | VIMA-BENCH | VIMA-BENCH | | | | | | | | Rearrange the objects to this <IMG> | 2 ∼ 5 | | | | Multi-modal | | Rearrange | Rearrange the objects to this <IMG> then restore | 3 ∼ 10 | | | | Multi-modal | | | Sweep all <obj> into <container> without exceeding <constraint> | 2 ∼ 6 | | | | Multi-modal | | | Sweep two <obj> into <container> without touching <constraint> | 2 ∼ 9 | | | | Multi-modal | | Constraints | Put all objects with same texture as <IMG> into it | 2 ∼ 8 | | | | Multi-modal | | | Put all objects with same shape as <IMG> into it | 2 ∼ 8 | | | | Multi-modal | | | Follow this motion for <obj>: <IMG 1>...<IMG N> | 2 ∼ 8 | | | | Multi-modal | | Follow | Stack objects in this order: <IMG 1>...<IMG N> | 2 ∼ 8 | | | | Multi-modal | ## B.3 The Wordcloud of Language Instructions To visually summarize and showcase the frequency of all instructions, including object nouns, colors, sizes, and verbs, we create a word cloud visualization in Figure 15. We tokenize each instruction and record all the tokens from the language instruction for each skill code used in the trajectory. Once we have this mapping from skills to tokens, we can generate heat maps and word clouds. These word distributions effectively visualize the scope of the benchmarks, which focus on manipulating objects in human spaces by following general complex instructions in unpredictable scenarios. <!-- image --> (d) All Three Task Types Figure 15: World Cloud: We created the word cloud to visually summarize the key aspects covered by the diverse manipulation instructions across three tasks types . ## C Details of Baselines CLIPort CLIPort [12] is a popular end-to-end algorithm functioning as a language-conditioned imitation learning agent that directly takes in high-level language instructions without a planner. It combines the broad semantic understanding of CLIP [33] with the spatial precision of Transporter [34]. As an end-to-end baseline, we make no modifications to CLIPort, as its native SE(2) action space is well-suited for benchmarks like Ravens, which is one of the key factors contributing to the high data efficiency of Transporter and CLIPort. We train CLIPort by matching general instructions with pairwise actions for Block&amp;Bowls and Letters. To accommodate the multi-modal instructions in VIMA-BENCH, we make additional adaptations by directly borrowing the prompt tokenization mechanism from VIMA without further modification, the same with other baselines. Specifically, instead of operating on raw RGB images, VIMA adopts an object-centric representation by cropping objects from both prompt and observation images as object tokens sequences with pixel coordinate information. LISA We also compare with LISA [8], a hierarchical imitation learning framework that discovers implicit skills and learns to combine them for complex tasks. LISA learns diverse, interpretable primitive behaviors or skills from language-conditioned demonstrations to better generalize to unseen instructions. It employs vector quantization to learn discrete skill codes that are highly correlated with language instructions and the behavior of the learned policy. LISA can be considered a form of language planning, where the predicted instructions are in the form of skill codes. The low-level foundation model in LISA uses a decision transformer as its backbone and we retain the original implementation without any additional modifications. PAR PAR [18] (Planner-Actor-Reporter) is a paradigm that replaces the skill predictor with an LLM, using a VLM as a reporter for visual observations. The instruction and the generated captions are then fed into the LLM for language planning. In PAR, Llama 2 13B [22] and VLM OpenFlamingo [35] with few-shot prompting are employed as the Planner and Reporter, respectively. It is important to note that the Actor, or the low-level foundation model, is precisely the language-conditioned CLIPort trained by stepwise sub-instructions, as mentioned earlier. To ensure fair comparisons, we make no modifications and keep the low-level foundation model consistent across all other baselines with CLIPort as backbones, the same with PAR. EmbodiedGPT EmbodiedGPT [7] is a standard paradigm that incorporates an MLLM for language planning. The main difference between EmbodiedGPT and PAR lies in the replacement of the LLM+VLM combination with a more advanced MLLM, which possesses stronger visual reasoning capabilities. EmbodiedGPT trains the MLLM with the constructed embodied chain-of-thought dataset to enable the MLLM to perceive visual details in its hidden layers, similar to LLaV A [28]. To ensure fair comparisons, we make no modifications to the planning module and keep the low-level foundation model consistent across all other baselines, using CLIPort as the backbone, identical to the approach in PAR. SuSIE SuSIE [9] proposes a hierarchical framework that leverages an image-editing diffusion model to act as a high-level planner by proposing intermediate subgoals that a low-level controller can accomplish. It is worth noting that SuSIE is not a strict vision planning method for long-horizon manipulation, as it can only support relatively simple single-step instructions and falls short when it comes to complex general instructions. To enable a comparison, we grant SuSIE a relaxed privilege by providing oracle stepwise instructions, as it is limited to handling sub-instructions of a single step and cannot generate image chains for complex general instructions. SuSIE chooses InstructPix2Pix [36] as the pre-trained image-editing model and fine-tunes it with a dataset of language-labeled video clips and robot trajectories from CALVIN [37]. Since the image editing model is sensitive to training data, we find that its generation performance on the Ravens domain is limited. To address this, we perform additional fine-tuning, keeping the number of training iterations and dataset exactly the same with PERIA. CoTDiffusion CoTDiffusion [10] is a standard vision planning paradigm that supports translating general complex instructions, including text-only or multi-modal prompts, into visual subgoal images in a chain-of-thought manner. Compared to SuSIE, the most significant difference lies in CoTDiffusion's explicit design of a semantic alignment module within the diffusion model to capture the correspondence and semantic completion between the generated images and the general instruction, enabling chain-of-thought generation. Similar to SuSIE, we fine-tune CoTDiffusion on our collected dataset and employ the same low-level image-conditioned policy as SuSIE, which is the image-conditioned variant of CLIPort. However, since CoTDiffusion does not explicitly introduce an LLM for planning, it may still encounter semantic conflicts during the vision planning process, such as repetition, backtracking, or skipping steps. ## D Details of High-level Planner Learning ## D.1 Pretraining of Perceiving Stage In the initial pretraining stage, PERIA aims to acquire vision-language knowledge and alignment between vision and LLM from a large collection of aligned image-text pairs. The designed captioning task for alignment of visual and language modalities in the encoding side is crucial for effective understanding and reasoning about visual scenes, bridging the gap between perception and reasoning in manipulation tasks and laying the foundation for the subsequent development of reasoning and imagination abilities in PERIA. We choose ViT-B-32 2 as the visual encoder and Vicuna-7B 3 as our LLM backbone. For the input visual projection, we opt for a simple linear projection module, as we found that more complex architectures like Q-former from BLIP2 [1] yield similar performance. The linear projection consists of three layers with a hidden size of 4096. We regard the output from the injected projection layer as a soft prompt for the LLM, prompting it to generate the corresponding ground-truth texts. Throughout the entire pretraining process, both the pre-trained vision encoder and the LLM remain frozen, with only the linear projection layer being fine-tuned. To perform the lightweight alignment, we freeze the weights of both the vision encoder and LLM, and only update the parameters of the projection module to encourage mapping the image features into a shared latent space that is compatible with the MLLM's hidden representations. Specifically, we train with a batch size of 64, using 8 V100 Nvidia GPUs for parallel training in 8 hours. We employ the AdamW optimizer [38] with a learning rate of 2e-4, a linear warmup of 1k steps, and a weight decay of 0.01. ## D.2 Joint Training of Reasoning and Imagining The output projection layer adopts a transformer-based architecture characterized by a hidden size of 512, 4 attention heads, 4 encoder layers, and 4 decoder layers. The latent image token embeddings and the word embedding are fed into the output projection layer and map the latent image representation into the latent space of the diffusion model. We train the MLLM and diffusion model jointly using instruction-following datasets and incorporate LoRA [29] to fine-tune the weights of the LLM to achieve lightweight supervised fine-tuning. For the diffusion-based image editing model, we choose the finetuning pipeline borrowed from Instruct-Pix2Pix [36], the most widely used pipeline for conditional image editing tasks. We train for 50k steps with a batch size of 16, using 8 V100 Nvidia GPUs for parallel training over 42 hours. We employ the AdamW optimizer [38] with a learning rate of 1e-4, a linear warmup of 1k steps, and a weight decay of 0.01. We track an exponential moving average (EMA) of the model parameters with a decay rate of 0.999 and use the EMA parameters at test time. The strength of classifier-free guidance ω is set to 2.0, and we use the DDIM sampler [39] with 50 sampling steps. Empirically, we find that the pretraining stage of perception is crucial. Without the encoding pretraining in the perception stage, the convergence time of the subsequent decoding side significantly increases, and the performance deteriorates. During the joint training of MLLM and the diffusion model, we observe that the loss convergence of the reasoning stage is often faster than that of the generation stage. One major reason could be that image editing is more challenging, requiring more details compared to language planning. We did not extensively tune the ratios between the image loss, generation loss, and consistency loss. Instead, we simply added them together considering the similar scaling ranges among them. Investigating the optimal weighting of these loss components could potentially further improve the synergy between language and vision planning, but we leave this exploration for future work. Furthermore, we notice that when the image loss of the diffusion model approaches convergence, continued training, although not resulting in a significant decrease in loss, 2 https://huggingface.co/sentence-transformers/clip-ViT-B-32 3 https://huggingface.co/lmsys/vicuna-7b-v1.5 , 7B, version 1.5 notably improves the fidelity of the generated images during evaluation. Therefore, after reaching a certain level, we turn off the gradients of the LLM and keep only the gradients of the diffusion model enabled, which can further accelerate the convergence speed of the generation loss without affecting the overall reasoning quality. However, we have not further investigated the specific relationship, as it is not our main focus, but it presents an interesting research direction that we will explore in future work. The summary of architecture and the parameters are listed in Table 5 as follows: Table 5: The overall configuration and training pipeline of two training phases for MLLM and diffusion model. | Training Stage | Vision Encoder | Vision Encoder | Input Projection | Input Projection | LLM | LLM | Output Projection | Output Projection | Diffusion | Diffusion | |------------------|------------------|------------------|--------------------|--------------------|-------------|-------|---------------------|---------------------|-------------|-------------| | | Name | Param | Name | Param | Name | Param | Name | Param | Name | Param | | Perceiving | ViT-B-32 | 87M | Linear | 18M | Vicuna [40] | 7B | - | - | - | - | | Reason& Imagine | ViT-B-32 | 87M | Linear | 18M | Vicuna [40] | 7B | Transformer | 31M | SD [41] | 1.3B | ## E Details of Low-level Policy Learning In the final phase of the PERIA framework, we focus on training the low-level policy model to develop its capability to act, enabling the effective execution of the generated language and vision plans from the previous phases. We adopt CLIPort, a widely used end-to-end learning algorithm for Ravens, as its native SE(2) action space is well-suited for benchmarks like Ravens, which is one of the key factors contributing to the high data efficiency of Transporter and CLIPort. CLIPort has two variants: a language-conditioned policy and an image goal-conditioned policy. We train these variants with stepwise language sub-instructions and coherent keyframes as inputs, respectively, allowing them to serve as the low-level foundation policy models for language planning and vision planning. For a fair comparison, our planning-based methods, whether language planning or vision planning, use CLIPort as the backbone for the low-level foundation model. To accommodate the simultaneous presence of stepwise language sub-instructions and coherent keyframes from vision planning and language planning, we design a variant that is conditioned on both image and sub-instruction simultaneously. We combine the two representations through a cross-attention block with a 4-layer lightweight attention layer, 4 cross-attention heads, and an embedding dimension of 768 as the fusion layer. We sample the oracle action trajectories a , current observation o , stepwise instruction e , and pair-wise subgoal images v from the D train as mini-batch B . The action ˆ a is predicted with the instruction e and corresponding v simultaneously. The low-level policy ψ is updated on mini-batch B according to the following loss: <!-- formula-not-decoded --> We use the AdamW optimizer [38] with a learning rate of 1e-4, a linear warmup of 500 steps, and a weight decay of 0.01. We train with a batch size of 64 for 10k steps on a single V100 Nvidia GPU, which takes 12 hours. The policy model is conditioned on both the generated subgoal images and the reasoned language instructions, reducing the training difficulty from two perspectives: first, by providing more decision-making information, and second, by shortening the prediction horizon for action sequences. Thanks to the explicit subgoal generation from the high-level visual planner, the low-level policy model is not required to master complex multi-step manipulation skills over long horizons. Furthermore, the stepwise instructions from language planning and the subgoal image chains from vision planning enable the policy model to be trained without directly conditioning the general instruction to predict the entire long-horizon action sequence. Instead, the policy model can predict action sequences in segments, effectively reducing the complexity of the policy learning task by leveraging the structured guidance provided by the language and vision planning components. ## F More Results and Analysis of Additional Experiments ## F.1 The Effect of Encoding-side Alignment To effectively improve the perceiving capabilities of the MLLM and lay a solid foundation for grounded reasoning and imagination, we introduce encoding-side alignment. We design a series of visual question answering (VQA) tasks to categorize the perception capabilities into five fundamental sub-skills: object recognition, color recognition, size identification, number counting, and spatial relationship understanding. For each fundamental perception capability, we design several targeted questions. The question templates are detailed in Table 6, and can be divided into two types based on the answer format: Yes/No questions and open-ended questions. We feed the questions and corresponding visual images into the MLLM for evaluation, prompting the MLLM to generate language-only answers. In this study, we use GPT-4 and compare its generated responses with the ground-truth answers and give a evaluation of the semantic similarity and correctness ranging from 1-5, with 1 being the lowest score, indicating an incorrect answer, and 5 representing the highest similarity, indicating the most accurate and contextually appropriate answer. Notably, we find that GPT-4's similarity assessments are highly accurate and closely match expert human evaluations, allowing us to directly employ it as a scoring mechanism. This approach enables automated large-scale evaluation without the need for extensive crowdsourced annotation resources, making the assessment process more efficient and cost-effective. We attempted to use GPT-4 for more fine-grained scoring (for example, scoring from 0-100), but found that the consistency with human evaluations was not as good as the 1-5 scale. For each term of fundamental perception capability, we randomly select 100 cases from the 28 tasks dataset used in the paper for evaluation. We calculate the total score and normalize it by the maximum possible score (28 task * 100 cases * 5 score) to obtain a percentage score, which is presented in Figure 16. Table 6: Overview of five types of fundamental perception capabilities. The question templates are illustrated and can be categorized into two types: open-ended and Yes/No questions. | Foundation Capability | Questions | Answer Type | |-------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------| | Object Recognition | Can you identify all the objects on the table? Are there any objects that are movable on the table? How many different types of objects are on the table? Does the letter appear on the table? What are the colors of the blocks? | Open-ended Yes/No Open-ended Yes/No Open-ended | | Color Recognition | Is the robot manipulating any of the red blocks? How many colors are there in blocks on the table? Are the colors of the blocks on the desktop duplicated? Can you tell the size of all the blocks on the table? | Yes/No Open-ended Yes/No | | Size Identification | Are the blocks the same size? How many different-sized blocks? Are the blocks relocated by the robot identical in size? | Open-ended Yes/No Open-ended Yes/No | | Number Counting | How many blocks are on the table? Which item has the highest number of different objects? Can you identify the number of blocks in bowls? How many objects were moved in total in a demonstration? | Open-ended Open-ended Open-ended Open-ended | | Spatial Relationship | Which corners of the table have no objects? Are there objects stacked on top of each other? Which area of the table has the most objects? How many layers is the highest stack of objects? | Open-ended Yes/No Open-ended Open-ended | Our results in Figure 16 demonstrate that carefully designed captioning tasks can significantly enhance the MLLM's performance across various foundational perceiving capabilities. When we remove the perception pretraining with encoding-side alignment, we observe more frequent hallucinations and errors in the designed VQA evaluations. It is worth noting that even the ablated version of perception pretraining still shows some advantage over EmbodiedGPT, which can be attributed to the additional image generation loss. The supervision from visual aspects encourages attention to visual details and spatial information for more grounded reasoning. Figure 16: The evaluation of fundamental perception capabilities between language planning methods with MLLMs. <!-- image --> ## F.2 The Effect of Joint Training To verify the effectiveness of language planning and vision planning, we conduct an ablation study by decoupling the training of reasoning and imaging. First, we train the MLLM solely for reasoning, generating text tokens to predict stepwise instructions. Subsequently, we use only the text tokens from the stepwise instructions as conditional information to train the image editing model. It is important to note that this differs from the visual fidelity experiment mentioned in the main text, as the &lt;IMG&gt; tokens are entirely set to zero, and the absence of joint training eliminates the consistency loss. We introduce a semantic similarity metric to evaluate instruction following accuracy. Specifically, we calculate the CLIP similarity between generated subgoal images and general prompts, normalized by the CLIP score between the ground truth ultimate goal image and prompts. This metric reflects the progress of generated subgoal images throughout the entire chain, tracking the instruction following and gradual advancement towards ultimate goals specified by complex instructions. Figure 17: The evaluation of the normalized CLIP scores between instructions and generated subgoal images for each generation step, reflecting the stepwise accuracy of instruction following and the incremental progress towards ultimate goals specified by complex instructions. <!-- image --> To ensure a consistent comparison, we select tasks with a horizon length of 4 across all task types. The results in Figure 17 show that CoTDiffusion has the worst semantic alignment due to the lack of explicit incorporation of LLMs for logically reliable reasoning. The results reveal that the generated images from the decoupled training version exhibit relatively poor instruction following compared to the jointly trained version. We attribute this to two main reasons. First, using only text instructions as conditioned information fails to provide sufficient guidance, which can be considered as a version with 0 &lt;IMG&gt; tokens. Second, the absence of a consistency loss constrains and encourages semantic alignment between the generated subgoal images and instructions. In summary, the performance drop caused by decoupled training highlights the benefits of joint training, which enables a synergistic effect, more fine-grained and consistent image generation, and instruction reasoning. It is worth noting that the performance drop caused by decoupled training is more significant on the VIMABENCH, highlighting the importance of latent image token embeddings in providing guidance beyond language, especially in task environments where text-only instructions are challenging to describe sufficiently and completely. ## F.3 The Flexibility of LLM Backbones To comprehensively compare the impact of different LLMs as backbones on the capabilities of the PERIA framework, we experiment with various LLMs backbones for fine-tuning, including Vicuna7B 4 , Vicuna-13B 5 , LLaMA-2-7B 6 , and LLaMA-3-8B 7 . The evaluation results for each model are presented in Figure 18. The results show that Vicuna-13B outperforms Vicuna-7B, indicating that larger model sizes can bring performance gains. However, the more recent and powerful LLaMA3-8B surpasses both Vicuna models, demonstrating that our framework can achieve substantial improvements and enhancements by leveraging stronger LLM backbones. Figure 18: The evaluation of PERIA with different LLM backbones across three task types. <!-- image --> ## G Quick Guideline of Usage ``` from PERIA import load_peria llm_backbone = ['Vicuna-7B', 'LLaMA2-7B', 'Vicuna-13B', 'LLaMA3-8B'] peria = load_peria(llm_backbone) low_level_fdm = load_fdm('fdm_path') observation = load_obs('obs_path') task_instruction = load_ins('ins_path') prompt = load_prompt('prompt_path') while not done: stepwise_instruction = peria.language_planning(observation, task_instruction, prompt) subgoal_image = peria.vision_planning(observation, task_instruction, prompt) action = low_level_fdm(observation, stepwise_instruction, subgoal_image) observation, done = env.step(action) ``` 4 https://huggingface.co/lmsys/vicuna-7b-v1.5 , 7B, version 1.5 5 https://huggingface.co/lmsys/vicuna-13b-v1.5 , 13B, version 1.5 6 https://huggingface.co/meta-llama/Llama-2-7b-chat-hf , 7B 7 https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct , 8B ## H Pesudocodes of Framework Algorithm 1 The training of PERIA for robotics manipulation. Input : Training dataset with pair-wised keyframesstepwise sub-instructions and action trajectories, MLLM, conditional diffusion model and low-level foundation model for action planning. ## Perceive: Encoding-side Alignment between Vision and Language ## for each iteration do Sample images I = { v , v 1 2 , ..., v n }} and pairwised caption C = { x , x 1 2 , ..., x l } from the D train as mini-batch B Calculate the projected visual tokens W ( f = V I ( ))) by projection layer W after the visual encoder V the encoding side. Fed the text token from prompt and the projected visual tokens into LLM jointly and infer caption ˆ x t in an autoregressive way: ˆ = x t MLLM ( { x , ...x 1 t -1 } , prompt | W ( f = {V ( v i ) } n i =1 )) Update parameters of W on mini-batch B according the following loss: L Perceive = ∑ l t =1 CELoss ( ˆ x , x t t ) end for ## Reasoning and Imagine: Decoding-side Joint training for MLLM and Diffusion Model for each iteration do Sample general task instructions T , initial observation image o , prompt P , pairwised sub-instructions E = { e , e 1 2 , ..., e l } and groundtruth subgoal images I = { v , v 1 2 , ..., v n }} from the D train as mini-batch B Fed the text token from general task instructions T and prompt P and the projected visual tokens from o into LLM jointly and reason the stepwise sub-instruction e ′ t : e ′ t = MLLM ( { e , ..., e 1 t -1 } | [ P T , , W ( f = V ( o )))] , calculate the reasoning loss as follows: L Reason = ∑ CELoss ( e , e t Extract the word embedding w [IMG] from append [IMG] token after the reasoning phase, and extract the hidden state h [IMG] from the last layer within MLLM . l t =1 ′ t ) Transform w [IMG] and h [IMG] into actual visual guidance U via image projector R Generate imagined subgoal images v via image editing diffusion model with conditional guidance U , and calculate the imagine loss as follows: L Imagine = E o,v, U ,ϵ ∼N (0 1) , ,t [ || ϵ -ϵ θ ( z , t, v, t U || ) 2 2 ] . Fed the generated image v and the original visual observation o back to MLLM and perform the same captioning task like Perceive Stage Infer the caption ˜ E of action recognition of consequent images between v and o Calculate the consistency between inferred instruction ˜ E and groundtruth instruction E : L ∑ l ( ˜ E E ) Update MLLM, diffusion model ϵ and corresponding projector R on the decoding side: L Total = L Reason + L Imagine + L Consistency Consistency = t =1 CELoss , end for ## Act: Training of Goal-conditioned Low-level Policy for each iteration do Sample the oracle action trajectories a , current observation o , stepwise instruction e and pair-wised subgoal images v from the D train as mini-batch B Predict the action ˆ a with the instruction e and corresponding v simultaneously Update low-level policy ψ on mini-batch B according the following loss: L Act = ∑ T t =1 || ˆ a t -p ψ ( a t | o , e t t , x t ) || end for ## I Limitation &amp; Future Work While PERIA demonstrates significant improvements in long-horizon manipulation tasks with complex instructions, there are still some limitations that need to be addressed in future work. - · The current implementation of PERIA relies on a pre-collected dataset for training the MLLM and diffusion model. Although this allows for effective learning of perception, reasoning, and imagination capabilities, it may limit the framework's adaptability to novel environments or tasks that deviate significantly from the training data. Future work could explore methods for online learning or adaptation to enable PERIA to generalize to new situations more effectively. - · The joint training of the MLLM and diffusion model can be computationally intensive and timeconsuming, particularly when generating high-quality images. While we have demonstrated the effectiveness of this approach, further research is needed to optimize the training process and improve its efficiency. This could involve the development of more lightweight architectures, advanced training techniques, or parallelization strategies. - · While PERIA has shown promising results in simulated environments, its performance in realworld scenarios remains to be explored. Real-world manipulation tasks may introduce additional challenges, such as noisy sensory inputs, dynamic environments, and physical constraints, which could affect the framework's performance. Future work should investigate the deployment of PERIA on physical robotic systems and assess its robustness and effectiveness in real-world settings. Despite these limitations, PERIA introduces a novel and promising paradigm towards enabling robots to perform complex manipulation tasks with general instructions. By addressing these challenges and continuing to refine the framework, we hope that PERIA can provide some insights to the robotics manipulation research towards long-horizon tasks with more complex instructions in free-form, paving the way for more intelligent and versatile robotic systems that can effectively operate in a wide range of environments and applications. ## J Social Impact By enabling robots to understand and follow more natural and diverse human instructions, PERIA can facilitate seamless human-robot collaboration in industries such as manufacturing, healthcare, and household assistance. This could lead to increased productivity, improved quality of life, and the creation of new job opportunities. For instance, an educational robot equipped with the PERIA framework could help children engage in constructive play activities, such as building block games or puzzles. The robot could provide step-by-step guidance and demonstrations, adapting to the child's skill level and learning pace. This interactive and personalized approach to learning could enhance children's cognitive development, problem-solving skills, and creativity. In conclusion, the advancements in long-horizon manipulation tasks presented in this work have the potential to advance the progress in the field of intelligent embodied robots, but responsible development and deployment practices must be adopted to ensure the safe, ethical, and beneficial integration of robots in educational or industrial settings. ## NeurIPS Paper Checklist ## 1. Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: [Yes] Justification: Please see the abstract's last four sentences and the introduction's last paragraph in Section 1. Guidelines: - · The answer NA means that the abstract and introduction do not include the claims made in the paper. - · The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. - · The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. - · It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. ## 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: See the conclusion section in Section 5. ## Guidelines: - · The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. - · The authors are encouraged to create a separate "Limitations" section in their paper. - · The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. - · The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. - · The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. - · The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. - · If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. - · While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. ## 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [NA] Justification: The paper does not include theoretical results. ## Guidelines: - · The answer NA means that the paper does not include theoretical results. - · All the theorems, formulas, and proofs in the paper should be numbered and crossreferenced. - · All assumptions should be clearly stated or referenced in the statement of any theorems. - · The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. - · Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. - · Theorems and Lemmas that the proof relies upon should be properly referenced. ## 4. Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: We have the code and model checkpoints ready for release. Besides, we provide sufficient implementation details for researchers to reproduce the results in Appendix D and ## Appendix E. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. - · If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. - · Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. - · While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example - (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. - (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. - (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). - (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. ## 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [No] Justification: The code and model checkpoints is ready and will be released after the conference decision is made. ## Guidelines: - · The answer NA means that paper does not include experiments requiring code. - · Please see the NeurIPS code and data submission guidelines ( https://nips.cc/ public/guides/CodeSubmissionPolicy ) for more details. - · While we encourage the release of code and data, we understand that this might not be possible, so 'No' is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). - · The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines ( https: //nips.cc/public/guides/CodeSubmissionPolicy ) for more details. - · The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. - · The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. - · At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). - · Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. ## 6. Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: We provide full details of benchmarks in Appendix A, baselines in Appendix C, implementation details in Appendix D and Appendix E. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. - · The full details can be provided either with the code, in appendix, or as supplemental material. ## 7. Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [Yes] Justification: We provided the mean and standard error over several random seeds in the experimental results to demonstrate statistical significance. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. - · The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). - · The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) - · The assumptions made should be given (e.g., Normally distributed errors). - · It should be clear whether the error bar is the standard deviation or the standard error of the mean. - · It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. - · For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). - · If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. ## 8. Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [Yes] Justification: We detailed the compute resources used for the experiments in Appendix D and Appendix E. Guidelines: - · The answer NA means that the paper does not include experiments. - · The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. - · The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. - · The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). ## 9. Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ? Answer: [Yes] Justification: Authors conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ## Guidelines: - · The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. - · If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. - · The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). ## 10. Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [Yes] Justification: See the conclusion in Section 5 and the social impact in Appendix J. Guidelines: - · The answer NA means that there is no societal impact of the work performed. - · If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. - · Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. - · The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. - · The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. - · If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). ## 11. Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [NA] Justification: The paper poses no such risks. Guidelines: - · The answer NA means that the paper poses no such risks. - · Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. - · Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. - · We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. ## 12. Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: [Yes] Justification: For all the datasets and algorithm baselines used in the paper, we have cited the original papers and provided the license, copyright information, and terms of use in the package in our code repository. ## Guidelines: - · The answer NA means that the paper does not use existing assets. - · The authors should cite the original paper that produced the code package or dataset. - · The authors should state which version of the asset is used and, if possible, include a URL. - · The name of the license (e.g., CC-BY 4.0) should be included for each asset. - · For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. - · If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. - · For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. - · If this information is not available online, the authors are encouraged to reach out to the asset's creators. ## 13. New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [NA] Justification: The paper does not release new assets. Guidelines: - · The answer NA means that the paper does not release new assets. - · Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. - · The paper should discuss whether and how consent was obtained from people whose asset is used. - · At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. ## 14. Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [NA] Justification: The paper does not involve crowdsourcing or research with human subjects. Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. - · According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. ## 15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [NA] Justification: The paper does not involve crowdsourcing or research with human subjects. Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. - · We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. - · For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
zv9gYC3xgF
Toward Global Convergence of Gradient EM for Over-Paramterized Gaussian Mixture Models
We study the gradient Expectation-Maximization (EM) algorithm for Gaussian Mixture Models (GMM) in the over-parameterized setting, where a general GMM with $n>1$ components learns from data that are generated by a single ground truth Gaussian distribution. While results for the special case of 2-Gaussian mixtures are well-known, a general global convergence analysis for arbitrary $n$ remains unresolved and faces several new technical barriers since the convergence becomes sub-linear and non-monotonic. To address these challenges, we construct a novel likelihood-based convergence analysis framework and rigorously prove that gradient EM converges globally with a sublinear rate $O(1/\sqrt{t})$. This is the first global convergence result for Gaussian mixtures with more than $2$ components. The sublinear convergence rate is due to the algorithmic nature of learning over-parameterized GMM with gradient EM. We also identify a new emerging technical challenge for learning general over-parameterized GMM: the existence of bad local regions that can trap gradient EM for an exponential number of steps.
https://openreview.net/pdf/01089f1b9d7a3757d7fe8abda681870c3db968be.pdf
[ { "confidence": 3, "rating": 6, "review_id": "6LkxBEXYgp", "review_text": "The paper studies the convergence of EM for learning mixtures of Gaussians. Specifically, they consider a simplified setting where the Gaussians are in $d$-dimensions and all have covariance $I_d$. They consider an overparameterized version of the problem where they parametrize the mixture they are trying to learn by a mixture of $n$ Gaussians with means $\\mu_1, \\dots , \\mu_n$ and the ground truth distribution generating the data just consists of a single Gaussian $N(\\mu^* , I_d)$. The paper analyzes the dynamics of gradient EM for this problem. The main result of the paper is proving that for this overparametrized variant, gradient EM converges to the true distribution at a rate of $1/\\sqrt{t}$ with additional constants depending exponentially on the distance between the initialized means and the true mean, which they show is necessary.\n\nThere has been a long line of work on understanding the convergence of EM or gradient EM for learning mixtures of Gaussians. Without overparametrization, provable convergence is known for mixtures of two Gaussians and it is also known that convergence fails in general for mixtures of three or more components. For overparamterized settings, a previous work [Dwivedi et. al. 2018] shows that if we parametrize a mixture of two Gaussians and try to learn a ground truth distribution consisting of a single Gaussian, then EM converges at a $1/\\sqrt{t}$ rate (as long as the mixing weights are set to be different). This is in contrast to when we parametrize with only a single Gaussian and EM converges exponentially fast. The results of the current paper can be seen as generalizing the results of [Dwivedi et. al. 2018] to more than two components. The paper empirically validates their theoretical results with experiments on simple synthetic datasets.\n\nThe paper makes progress on a well-studied problem of understanding convergence of EM for learning GMMs. They give the first global convergence results for mixtures with more than two components.\n\nThe paper overcomes nontrivial technical barriers to extend previous results to more than two components.\n\nThe results of the paper only work when the ground truth is \"trivial\" i.e. a single Gaussian.\n\nThe results are qualitatively similar to previous work on overparametrized mixtures of two Gaussians. The contributions of the paper are mostly technical and it is a bit difficult to find a nice conceptual takeaway \\--- the previous work for two components already showed that overparametrization can lead to drastically slower convergence. It would be much more exciting and novel, say, if we could prove something when the ground truth were not just a single Gaussian.\n\n." }, { "confidence": 3, "rating": 6, "review_id": "Q83DuxxS9R", "review_text": "This paper talks about the gradient-EM algorithm for over-parameterized GMM. The paper mostly shows the GLOBAL convergence and its rate when using this model to learn a single Gaussian.\n\nI believe any non-convex global convergence optimization problem is valuable. It is an extension of Dwivedi et al. 2019.\n\n1. The over-parametrized model may have severe overfitting problem. \n2. The based distribution is quite easy: a single normal, with known variance. In the paper, the covariance is fixed as the identity, which simplifies the problem in a deep way. Actually for symmetric 2-GMM, there are already faster algorithms to learn both mean and cov. \n3. I feel confused about the consistency and convergence in the paper. In Line 96, the convergence of KL divergence also contains the convergence of MLE, ie consistency. The convergence to the MLE is another loss function. Also in Remark 6, the convergence when sample size to infinity seems more easily ensured by WLLN.\n\nBesides the weakness above, I also have following questions:\n4. If you only learn the single normal, how is the algorithm compared with Dwivedi et al. 2019 or just 2-GMM? Is it necessary to use more? Is it overfitting so the performance seems better?\n5. I don’t get why the paper introduces Fact 1. It seems obvious. \n6. The mean is convergent to 0 (true) instead of the MLE." }, { "confidence": 2, "rating": 6, "review_id": "am4YAV6doi", "review_text": "The paper focuses on the setting of a Gaussian Mixture Model with several summands and an input vector produced by one Gaussian distribution, where it employs the Expectation-Maximization rule to infer the model's parameters. Since the problem of having arbitrary number of summands has been unsolved, the paper provides an innovative scheme which includes the computation of the likelihood function and shows that the EM algorithm converges with sublinear complexity. \n\nThe authors also show that there exist neighborhoods of slow convergence rates.\n\n- The paper is well written, the theorems, lemmata and algorithmic steps are described gradually.\n- From a first overview of the literature, the result about global convergence seems novel. \n- Across section 4, there is intuition and remarks provided about the necessity of the steps.\n\n- The experimental evaluation is used as a proof of concept and thus is limited. The authors could have (potentially) experimented with several datasets, with varying weights in the GMM, and try to benchmark their algorithm to compare the emergent convergence rates.\n\nNA." }, { "confidence": 4, "rating": 6, "review_id": "pTgLGsoIvx", "review_text": "The paper considers fitting a single Gaussian with multiple-component Gaussian mixture models (GMM) through the Gradient EM algorithm. While the two balanced over-specified Gaussian setting has been widely studied in the previous work, generalizing it to multiple-component GMM requires significant algebraic efforts. The entirety of the paper is to show the $1/\\sqrt{t}$ convergence rate of the population EM algorithm. In particular, the paper characterizes the explicit convergence rate of $1/\\sqrt{T}$ with constants exponential in the number of components, the phenomenon that coincides with the exponential lower bound for the parameter estimation of general GMMs with no separation.\n\n-\tExtending some existing two-component results to general multiple-component GMM is non-trivial and significant. The paper nicely characterizes the convergence rate that captures some important properties of learning GMM that can be achieved by GMM. \n\n-\tThe paper is well-written, emphasizing important aspects of the results and well-contrasting their techniques to existing results. \n\n-\tProof sketch is nicely written to help readers understand their key results.\n\n-\tWhile the lower bound result (Theorem 7) is a nice addition to the literature, I believe that the gap between this lower bound and the upper bound is large, since the upper bound is exponentially slow in the number of components. \n\n-\tOne important result from two specified GMM is the $n^{-1/4}$ (n is the number of samples here) statistical rate after convergence. I would like to see $n^{-1/2k}$ style results in general k-component GMM settings. At least, the authors should have discussed this aspect of previous work and contrasted the implications to k-GMM settings. \n\n-\tThe experiment would have been nicer if the final statistical rates were compared.\n\n-\tMaybe authors can elaborate on how their results can imply learning k-GMM with small separations?\n\n-\tIn Theorem 7, there is no restriction on the step size $\\eta$. I believe that the lower bound should also be able to tell that $\\eta$ cannot be set too large.\n\n-\tWhy only on the gradient EM? Can the analysis in the paper imply some convergence rates of the standard EM algorithm as well? I think it would make the paper much stronger if it could show that the same results hold for standard EM." } ]
## Toward Global Convergence of Gradient EM for Over-Parameterized Gaussian Mixture Models Weihang Xu University of Washington [email protected] Maryam Fazel University of Washington [email protected] Simon S. Du University of Washington [email protected] ## Abstract We study the gradient Expectation-Maximization (EM) algorithm for Gaussian Mixture Models (GMM) in the over-parameterized setting, where a general GMM with n &gt; 1 components learns from data that are generated by a single ground truth Gaussian distribution. While results for the special case of 2-Gaussian mixtures are well-known, a general global convergence analysis for arbitrary n remains unresolved and faces several new technical barriers since the convergence becomes sub-linear and non-monotonic. To address these challenges, we construct a novel likelihood-based convergence analysis framework and rigorously prove that gradient EM converges globally with a sublinear rate O (1 / √ t ) . This is the first global convergence result for Gaussian mixtures with more than 2 components. The sublinear convergence rate is due to the algorithmic nature of learning overparameterized GMM with gradient EM. We also identify a new emerging technical challenge for learning general over-parameterized GMM: the existence of bad local regions that can trap gradient EM for an exponential number of steps. ## 1 Introduction Learning Gaussian Mixture Models (GMM) is a fundamental problem in machine learning with broad applications. In this problem, data generated from a mixture of n ≥ 2 ground truth Gaussians are observed without the label (the index of component Gaussian that data is sampled from), and the goal is to retrieve the maximum likelihood estimation of Gaussian components. The Expectation Maximization (EM) algorithm is arguably the most widely-used algorithm for this problem. Each iteration of the EM algorithm consists of two steps. In the expectation (E) step, it computes the posterior probability of unobserved mixture membership label according to the current parameterized model. In the maximization (M) step, it computes the maximizer of the Q function, which is the likelihood with respect to posterior estimation of the hidden label computed in the E step. Gradient EM, as a popular variant of EM, is often used in practice when the maximization step of EM is costly or even intractable. It replaces the M step of EM with taking one gradient step on the Q function. Learning Gaussian Mixture Models with EM/gradient EM is an important and widely-studied problem. Starting from the seminal work [Balakrishnan et al., 2014], a flurry of work Daskalakis et al. [2017], Xu et al. [2016], Dwivedi et al. [2018a], Kwon and Caramanis [2020], Dwivedi et al. [2019] have studied the convergence guarantee for EM/gradient EM in various settings. However, these works either only prove local convergence, or consider the special case of 2 -Gaussian mixtures. A general global convergence analysis of EM/gradient EM on n -Gaussian mixtures still remains unresolved. Jin et al. [2016] is a notable negative result in this regard, where the authors show that on GMM with n ≥ 3 components, randomly initialized EM will get trapped in a spurious local minimum with high probability. Over-parameterized Gaussian Mixture Models. Motivated by the negative results, a line of work considers the over-parameterized setting where the model uses more Gaussian components than the ground truth GMM, in the hope that it might help the global convergence of EM and bypass the negative result. In such over-parameterized regime, the best that people know so far is from [Dwivedi et al., 2018b]. This work proves global convergence of 2-Gaussian mixtures on one single Gaussian ground truth. The authors also show that EM has a unique sub-linear convergence rate in this over-parameterized setting (compared with the linear convergence rate in the exact-parameterized setting [Balakrishnan et al., 2014]). This motivates the following natural open question: Can we prove global convergence of the EM/gradient EM algorithm on general n -Gaussian mixtures in the over-parameterized regime? In this paper, we take a significant step towards answering this question. Our main contributions can be summarized as follows: - · Weprove global convergence of the gradient EM algorithm for learning general n -component GMMononesingle ground truth Gaussian distribution. This is, to the best of our knowledge, the first global convergence proof for general n -component GMM. Our convergence rate is sub-linear, reflecting an inherent nature of over-parameterized GMM (see Remark 3 for details). - · We propose a new analysis framework that utilizes the likelihood function for proving convergence of gradient EM. Our new framework tackles several emerging technical barriers for global analysis of general GMM. - · We also identify a new geometric property of gradient EM for learning general n -component GMM: There exists bad initialization regions that traps gradient EM for exponentially long, resulting in an inevitable exponential factor in the convergence rate of gradient EM. ## 1.1 Gaussian Mixture Model (GMM) We consider the canonical Gaussian Mixture Models with weights π = ( π , . . . , π 1 n ) ( ∑ n i =1 π i = 1 ), means µ = ( µ ⊤ 1 , . . . , µ ⊤ ⊤ n ) and unit covariance matrices I d in d -dimensional space. Following a widely-studied setting [Balakrishnan et al., 2014, Yan et al., 2017, Daskalakis et al., 2017], we set the weights π and covariances I d in student GMM as fixed, and the means µ = ( µ ⊤ 1 , . . . , µ ⊤ ⊤ n ) as trainable parameters. We use GMM ( µ ) to denote the GMM model parameterized by µ , which can be described with probability density function (PDF) p µ : R d → R ≥ 0 as <!-- formula-not-decoded --> where ϕ ( ·| µ, Σ) is the PDF of N ( µ, Σ) , π 1 + · · · + π n = 1 , π i &gt; , 0 ∀ i ∈ [ n ] . ## 1.2 Gradient EM algorithm The EM algorithm is one of the most popular algorithms for retrieving the maximum likelihood estimator (MLE) on latent variable models. In general, EM and gradient EM address the following problem: given a joint distribution p µ ∗ ( x, y ) of random variables x, y parameterized by µ ∗ , observing only the distribution of x , but not the latent variable y , the goal of EM and gradient EM is to retrieve the maximum likelihood estimator <!-- formula-not-decoded --> The focus of this paper is the non-convex optimization analysis, so we consider using population gradient EM algorithm to learn GMM (1), where the observed variable is x ∈ R d and latent variable is the index of membership Gaussian in GMM. We follow the standard teacher-student setting where a student model GMM ( µ ) with n ≥ 2 Gaussian components learns from data generated from a ground truth teacher model GMM ( µ ∗ ) . We consider the over-parameterized setting where the ground truth model GMM ( µ ∗ ) is a single Gaussian distribution N ( µ , I ∗ d ) , namely µ ∗ = ( µ ∗⊤ , . . . , µ ∗⊤ ⊤ ) . We can then further assume w.l.o.g. that µ ∗ = 0 . Our problem could be seen as a strict generalization of Dwivedi et al. [2018b], where they studied using mixture model of two Gaussians with symmetric means (they set constraint µ 2 = -µ 1 ) to learn one single Gaussian. At time step t = 0 1 2 , , , . . . , given with parameters µ ( ) t = ( µ 1 ( ) t ⊤ , . . . , µ n ( ) t ⊤ ⊤ ) , population gradient EM updates µ via the following two steps - · E step: for each i ∈ [ n ] , compute the membership weight function ψ i : R d → R defined as <!-- formula-not-decoded --> - · Mstep: Define Q ( ·| , µ ( )) t as <!-- formula-not-decoded --> Gradient EM with step size η &gt; 0 performs the following update: µ i ( t +1) = µ i ( ) t - ∇ η µ i Q ( µ ( ) t | µ ( )) = t µ i ( ) t -η E x ∼N (0 ,I d ) [ ψ i ( x | µ ( ))( t µ i ( ) t -x )] . (3) The membership weight function x → ψ i ( x | µ ) represents the posterior probability of data point x being sampled from the i th Gaussian of GMM ( µ ) . For ease of notation, we sometimes simply write ψ i ( x | µ ) as ψ i ( x ) when the choice of µ is obvious. ## 1.3 Loss function of gradient EM Since the task of gradient EM is to find the MLE over ground truth distribution p µ ∗ , we can define the MLE loss function for gradient EM as <!-- formula-not-decoded --> The loss L is the Kullback-Leibler (KL) divergence between the ground truth GMM and the student model GMM. Since finding MLE is equivalent to minimizing the KL divergence between model and the ground truth, the goal of gradient EM is equivalent to finding the global minimum of loss L . In other words, proving that gradient EM finds the MLE is equivalent with proving the convergence of L to 0 . However, we are going to present another reason why loss function L is important, for it is also closely related to the dynamics of gradient EM. Gradient EM is gradient descent on L . We present the following important observation. The proof is deferred to appendix. <!-- formula-not-decoded --> Fact 1 states that the gradient of Q function that gradient EM optimizes in each iteration is identical to the gradient of loss function L . This observation is very useful since it implies that gradient EM is equivalent to gradient descent (GD) algorithm on L . This observation is not a new discovery of ours but actually a wide-spread folklore (see [Jin et al., 2016]). However, our new contribution is to observe Fact 1 is very helpful for analyzing gradient EM, and to construct a new convergence analysis framework for gradient EM based on it. ## 1.4 Notation ̸ In this paper, we adopt the following notational conventions. We denote { 1 2 , , . . . , n } with [ n ] . µ = ( µ ⊤ 1 , . . . , µ ⊤ ⊤ n ) ∈ R nd denotes the parameter vector of GMM obtained by concatenating Gaussian mean vectors µ , . . . , µ 1 n together. For any vector µ µ t , ( ) denotes its value at time step t , sometimes we omit this iteration number t when its choice is clear and simply abbreviate µ t ( ) as µ . We define a shorthand of expectation taken over the ground truth GMM E x ∼N (0 ,I d ) [ ] · as E x [ ] · . For any vector v = 0 , we use v := v/ v ∥ ∥ to denote the normalization of v . We define (with a slight abuse of notation) i max := arg max i ∈ [ n ] {∥ µ i ∥} as the index of µ i with the maximum norm, and µ max := ∥ µ i max ∥ = max i ∈ [ n ] {∥ µ i ∥} as the maximum norm of µ i . In particular, µ max ( ) = t max {∥ µ 1 ( ) t ∥ , . . . , ∥ µ n ( ) t ∥} . Similarly, π min := min i ∈ [ n ] π i and π max := max i ∈ [ n ] π i denotes the minimal and maximal π i , respectively. We use ∇ µ i L to denote the gradient of µ i on L , and ∇L = ( ∇ µ 1 L ⊤ , . . . , ∇ µ n L ) ⊤ denotes the collection of all gradients. Finally we define a potential function U : R nd → R for GMM ( µ ) as <!-- formula-not-decoded --> ## 1.5 Technical overview Here we provide a brief summary of the major technical barriers for our global convergence analysis and our techniques for overcoming them. New likelihood-based analysis framework. The traditional convergence analysis for EM/gradient EM in previous works Balakrishnan et al. [2014], Yan et al. [2017], Kwon and Caramanis [2020] proceeds by showing the distance between the model and the ground truth GMM in the parameter space contracts linearly in every iteration. This type of approach meets new challenges in the over-parameterized n -Gaussian mixture setting since the convergence is both sub-linear and nonmonotonic. To address these problems, we propose a new likelihood-based convergence analysis framework: instead of proving the convergence of parameters, our analysis proceeds by showing the likelihood loss function L converges to 0 . The new analysis framework is more flexible and allows us to overcome the aforementioned technical barriers. Gradient lower bound. The first step of our global convergence analysis constructs a gradient lower bound. Using some algebraic transformation techniques, we convert the gradient projection ⟨L ( µ ) , µ ⟩ into the expected norm square of a random vector ˜ ( ψ x ) . (See Section (4) for the full definition). Although lower bounding the expectation of ˜ ψ is very challenging, our key idea is that the gradient of ˜ ψ has very nice properties and can be easily lower bounded, allowing us to establish the gradient lower bound. Local smoothness and regularity condition. After obtaining the gradient lower bound, the missing component of the proof is a smoothness condition of the loss function L . Since proving the smoothness of L is hard in general, we define and prove a weaker notion of local smoothness, which suffices to prove our result. In addition, we design and use an auxiliary function U to show that gradient EM trajectory satisfies the locality required by our smoothness lemma. ## 2 Related work ## 2.1 2-Gaussian mixtures There is a vast literature studying the convergence of EM/gradient EM on 2 -component GMM. The initial batch of results proves convergence within a infinitesimally small local region [Xu and Jordan, 1996, Ma et al., 2000]. Balakrishnan et al. [2014] proves for the first time convergence of EM and gradient EM within a non-infinitesimal local region. Among the later works on the same problem, Klusowski and Brinda [2016] improves the basin of convergence guarantee, Daskalakis et al. [2017], Xu et al. [2016] proves the global convergence for 2 -Gaussian mixtures. These works focused on the exact-parameterization scenario where the number of student mixtures is the same as that of the ground truth. More recently, Wu and Zhou [2019] proves global convergence of 2 -component GMMwithout any separation condition. Their result can be viewed as a convergence result in the over-parameterized setting where the student model has two Gaussians and the ground truth is a single Gaussian. On the other hand, their setting is more restricted than ours because they require the means of two Gaussians in the student model to be symmetric around the ground truth mean. Weinberger and Bresler [2021] extends the convergence guarantee to the case of unbalanced weights. Another line of work Dwivedi et al. [2018b, 2019, 2018a] studies the over-parameterized setting of using 2 -Gaussian mixture to learn a single Gaussian and proves global convergence of EM. Our result extends this type of analysis to the general case of n -Gaussian mixtures, which requires significantly different techniques. We note that going beyond Gaussian mixture models, there are also works studying EM algorithms for other mixture models such as a mixture of linear regression Kwon et al. [2019]. ## 2.2 N-Gaussian mixtures Another line of results focuses on the general case of n Gaussian mixtures. Jin et al. [2016] provides a counter-example showing that EM does not converge globally for n &gt; 2 (in the exact-parameterized case). Dasgupta and Schulman [2000] prove that a variant of EM converges to MLE in two rounds for n -GMM. Their result relies on a modification of the EM algorithm and is not comparable with ours. [Chen et al., 2023] analyzes the structure of local minima in the likelihood function of GMM. However, their result is purely geometric and does not provide any convergence guarantee. Aseries of paper Yan et al. [2017], Zhao et al. [2018], Kwon and Caramanis [2020], Segol and Nadler follow the framework proposed by Balakrishnan et al. [2014] to prove the local convergence of EM for n -GMM. While their result applies to the more general n -Gaussian mixture ground truth setting, their framework only provides local convergence guarantee and cannot be directly applied to our setting. ## 2.3 Slowdown due to over-parameterization √ This paper gives an O ( 1 / t ) bound for fitting over-parameterized Gaussian mixture models to a single Gaussian. Recall that to learn a single Gaussian, if one's student model is also a single Gaussian, then one can obtain an exp( -Ω( )) t rate because the loss is strongly convex. This slowdown effect due to over-parameterization has been observed for Gaussian mixtures in Dwivedi et al. [2018a], Wu and Zhou [2019], but has also been observed in other learning problems, such as learning a two-layer neural network Xu and Du [2023], Richert et al. [2022] and matrix sensing problems [Xiong et al., 2023, Zhang et al., 2021, Zhuo et al., 2021]. ## 3 Main results In this section, we present our main theoretical result, which consists of two parts: In Section 3.1 we present our global convergence analysis of gradient EM, in Section 3.2 we prove that an exponentially small factor in our convergence bound is inevitable and cannot be removed. All omitted proofs are deferred to the appendix. ## 3.1 Global convergence of gradient EM We first present our main result, which states that gradient EM converges to MLE globally. Theorem 2 (Main result) . Consider training a student n -component GMM initialized from µ (0) = ( µ 1 (0) ⊤ , . . . , µ n (0) ⊤ ⊤ ) to learn a single-component ground truth GMM N (0 , I d ) with population gradient EM algorithm. If the step size satisfies η ≤ O ( exp( -8 U (0)) π 2 min n d 2 2 ( 1 µ max(0) + µ max (0)) 2 ) , then gradient EM converges globally with rate <!-- formula-not-decoded --> where γ = Ω ( η exp( -16 U (0)) π 4 min n d 2 2 (1+ µ max (0) √ dn ) 4 ) ∈ R + . Recall that µ max (0) = max {∥ µ 1 (0) ∥ , . . . , ∥ µ n (0) ∥} and U (0) = ∑ i ∈ [ n ] ∥ µ i (0) ∥ 2 are two initialization constants. Remark 3. Without over-parameterization, for learning a single Gaussian, one can obtain a linear convergence exp( -Ω( )) t . We would like to note that the sub-linear convergence rate guarantee of gradient EM stated in Theorem 2 ( L ( µ ( )) t ≤ O (1 / √ t ) ) is due to the inherent nature of the algorithm. Dwivedi et al. [2018b] studied the special case of using 2 Gaussian mixtures with symmetric means to learn a single Gaussian and proved that EM has sublinear convergence rate when the weights π i are equal. Since Theorem 2 studies the more general case of n Gaussian mixtures, this type of subexponential convergence rate is the best than we can hope for. Remark 4. The convergence rate in Theorem 2 has a factor exponentially small in the initialization scale ( γ ∝ exp( -16 U (0)) ). We would like to stress that this is again due to algorithmic nature of the problem rather than the limitation of analysis. In Section 3.2, we prove that there exists bad regions with exponentially small gradients so that when initialized from such region, gradient EM gets trapped locally for exp(Ω( U (0))) number of steps. Therefore, a convergence speed guarantee exponentially small in U (0) is inevitable and cannot be improved. Remark 5. Theorem 2 is fundamentally different from convergence analysis for EM/gradient EM in previous works Yan et al. [2017], Dwivedi et al. [2019], Balakrishnan et al. [2014] which proved monotonic linear contraction of parameter distance ∥ µ ( ) t -µ ∗ ∥ . But our result also implies global convergence since loss function L converging to 0 is equivalent to convergence of gradient EM to MLE. Remark 6. The convergence result in Theorem 2 is for population gradient EM, but it also implies global convergence for sample-based gradient EM as the sample size tends to infinity. For a similar reduction from population EM to sample EM, see Section 2.2 of [Xu et al., 2016]. ## 3.2 Necessity of exponentially small factor in convergence rate In this section we prove that a factor exponentially small in initialization scale ( exp( -Θ( U (0))) ) is inevitable in the global convergence rate guarantee of gradient EM. Particularly, we show the existence of bad regions such that initialization from this region traps gradient EM for exponentially long time before final convergence. Our result is the following theorem. Theorem 7 (Existence of bad initialization region) . For any n ≥ 3 , define ˜(0) µ = ( µ ⊤ 1 (0) , . . . , µ ⊤ n (0)) as follows: µ 1 (0) = 12 √ de , µ 1 2 (0) = -12 √ de , µ 1 3 (0) = · · · = µ n (0) = 0 , where e 1 is a standard unit vector. Then population gradient EM initialized with means ˜(0) µ and equal weights π 1 = . . . = π n = 1 /n will be trapped in a bad local region around ˜(0) µ for exponentially long time <!-- formula-not-decoded --> More rigorously, for any 0 ≤ t ≤ T, ∃ i ∈ [ n ] such that <!-- formula-not-decoded --> Theorem 7 states that, when initialized from some bad points µ (0) , after exp(Θ( U (0))) number of time steps, gradient EM will still stay in this local region and remain 10 √ d distance away from the global minimum µ = 0 . Therefore an exponentially small factor in convergence rate is inevitable. Remark 8. Theorem 7 eliminates the possibility of proving any polynomial convergence rate of gradient EM from arbitrary initialization. However, it is still possible to prove that, with some specific smart initialization schemes, gradient EM avoids the bad regions stated in Theorem 7 and enjoys a polynomial convergence rate. We leave this as an interesting open question for future analysis. ## 4 Proof overview In this section, we provide a technical overview of the proof in our main result (Theorem 2 and Theorem 7). ## 4.1 Difficulties of a global convergence proof and our new analysis framework Proving the global convergence of gradient EM for general n -Gaussian mixture is highly nontrivial. While there have been many previous works [Balakrishnan et al., 2014, Yan et al., 2017, Dwivedi et al., 2018b] studying either local convergence or the special case of 2 -Gaussian mixtures, they all focus on showing the contraction of parametric error. Namely, their proof proceeds by showing the distance between the model parameter and the ground truth contracts, usually by a fixed linear ratio, in each iteration of the algorithm. However, this kind of approach faces various challenges for our general problem where the convergence is both sublinear and non-monotonic . Since the convergence rate is sublinear (see Remark 3), showing a linear contraction per iteration is no longer possible. Since the convergence is non-monotonic 1 , we also cannot show a strictly decreasing parametric distance. To address these challenges, we propose a new convergence analysis framework for gradient EM by proving the convergence of likelihood L instead of the convergence of parameters µ . There are several benefits for considering the convergence from the perspective of MLE loss L . Firstly, it naturally addresses the problem of non-monotonic and sub-linear convergence since we only need to show L decreases as the algorithm updates. Also, since gradient EM is equivalent with running gradient descent on loss function L (see Section 1.3), we can apply techniques from the optimization theory of gradient descent to facilitate our analysis. ## 4.2 Proof ideas for Theorem 2 We first briefly outline our proof of Theorem 2. Proof roadmap. Our proof of Theorem 2 consists of three steps. Firstly, we prove a gradient lower bound for L (Theorem 12). Then we prove that the MLE L is locally smooth (Theorem 13). Finally, 1 To see this, consider n = 2 , µ 1 = 0 , µ 2 = (1 0 , , . . . , 0) ⊤ , then the norm of µ 1 strictly increases after one iteration. we combine the gradient lower bound and the smoothness condition to prove the global convergence of L with mathematical induction. ## Step 1: Gradient lower bound. Our first step aims to show that the gradient norm of L ( µ ) is lower bounded by the distance of µ to the ground truth. To do this, we need a few preliminary results. Inspired by Chen et al. [2023], we use Stein's identity [Stein, 1981] to perform an algebraic transformation of the gradient. Recalling the definition of ψ i in (2), we have the following lemma. Lemma 9. For any GMM ( µ ) , i ∈ [ n ] , the gradient of Q satisfies <!-- formula-not-decoded --> The gradient expression above is equivalent with the form in (3), but is easier to manipulate. Using the transformed gradient in Lemma 9, we have the following corollary. Corollary 10. Define vector ˜ ψ µ ( x ) := ∑ i ∈ [ n ] ψ i ( x µ ) i . For any GMM ( µ ) , the projection of the gradient of ∇L ( µ ) onto µ satisfies <!-- formula-not-decoded --> Corollary 9 is important since it converts the projection of gradient ∇L ( µ ) onto µ to the expected norm square of a vector ˜ ψ µ . Since a lower bound of the gradient projection implies a lower bound of the gradient, we only need to construct a lower bound for ⟨∇L ( µ ) , µ ⟩ = E x [ ∥ ∥ ∥ ˜ ψ µ ( x ) ∥ ] ∥ ∥ 2 . Since ∥ ∥ ∥ ˜ ψ µ ( x ) ∥ ∥ ∥ 2 is always non-negative, we already know that the gradient projection is non-negative. But lower bounding E x [ ∥ ∥ ∥ ˜ ψ µ ( x ) ∥ ] ∥ ∥ 2 is still highly nontrivial since the expression of ˜ ψ is complicated and hard to handle. However, our key observation is that, although ˜ ψ itself is hard to bound, its gradient has nice properties and can be handled gracefully : <!-- formula-not-decoded --> The gradient (5) is nicely-behaved. One can see immediately from (5) that the matrix ∇ x ˜ ψ µ ( x ) is positive-semi-definite, and its eigenvalues can be directly bounded. To utilize these properties, we use the following algebraic trick to convert the task of lower bounding ˜ ψ itself into the task of lower bounding its gradient. <!-- formula-not-decoded --> Recall that ¯ = x x ∥ x ∥ . See detailed derivation in (23). Using (5), combined with the properties of ∇ x ˜ ψ µ ( x ) , we can obtain the following lemma (Recall that U = ∑ i ∈ [ n ] ∥ µ i ∥ 2 .): Lemma 11. For any GMM ( µ ) we have <!-- formula-not-decoded --> On top of Lemma 11, we can easily lower bound the gradient projection in the following lemma, finishing the first step of our proof. Lemma 12 (Gradient projection lower bound) . For any GMM ( µ ) we have <!-- formula-not-decoded --> ## Step 2: Local smoothness. To construct a global convergence analysis for gradient-based methods, after obtaining a gradient lower bound, we still need to prove the smoothness of loss L . (Recall that global smoothness of function f means that there exists constant C such that ∥∇ f ( x 1 ) -∇ f ( x 2 ) ∥ ≤ C x ∥ 1 -x 2 ∥ ∀ , x , x 1 2 .) However, proving the smoothness for L in general is very challenging since the membership function ψ i cannot be bounded when µ is unbounded. To address this issue, we prove that L is locally smooth, i.e. , the smoothness between two points µ and µ ′ is satisfied if both ∥ µ ∥ and ∥ µ -µ ′ ∥ are upper bounded. Our result is the following theorem. Theorem 13 (Local smoothness of loss function) . At any two points µ = ( µ ⊤ 1 , . . . , µ ⊤ ⊤ n ) and µ + δ = (( µ 1 + δ 1 ) ⊤ , . . . , ( µ n + δ n ) ⊤ ⊤ ) , if <!-- formula-not-decoded --> then the loss function L satisfies the following smoothness property: for any i ∈ [ n ] we have <!-- formula-not-decoded --> ## Step 3: putting everything together. Given the gradient lower bound and the smoothness condition, we still need to resolve two remaining problems. The first one is that the gradient lower bound in Lemma 12 is given in terms of µ , which we need to convert to a lower bound in terms of L ( µ ) . For this we need the following upper bound of L . Theorem 14 (Loss function upper bound) . The loss function can be upper bounded as <!-- formula-not-decoded --> The second problem is that our local smoothness theorem requires µ to be bounded, therefore we need to show a regularity condition that for each i , µ i ( ) t stays in a bounded region during gradient EMupdates. This is not easy to prove for each individual µ i due to the same non-monotonic issue mentioned in Section 4.1. To establish such a regularity condition, we use the potential function. U to solve this problem. We prove that U remains bounded along the gradient EM trajectory, implying each µ i remains well-behaved. With this regularity condition, combined with the previous two steps, we finish the proof of Theorem 2 via mathematical induction. ## 4.3 Proof ideas for Theorem 7 Proving Theorem 7 is much simpler. The idea is natural: we found that there exists some bad regions where the gradient of L is exponentially small, characterized by the following lemma. Lemma 15 (Gradient norm upper bound) . For any µ satisfying ∥ µ 1 ∥ ∥ , µ 2 ∥ ≥ 10 √ d, ∥ µ 3 ∥ , . . . , ∥ µ n ∥ ≤ √ d , the gradient of L at µ can be upper bounded as <!-- formula-not-decoded --> Utilizing Lemma 15, we can prove Theorem 7 by showing that initialization from these bad regions will get trapped in it for exponentially long, since the gradient norm is exponentially small. The full proof can be found in Appendix B.2. <!-- image --> Figure 1: Left: Sublinear convergence of the likelihood loss L . Middle: Sublinear convergence of the parametric distance ∑ i ∈ [ n ] π i ∥ µ i -µ ∗ ∥ 2 between student GMM and the ground truth. Right: Impact of different mixing weights on the convergence speed. <!-- image --> <!-- image --> Figure 2: Left: Gradient norm ∥∇L ( µ (0)) ∥ in the counter-example in Theorem 7 decreases exponentially fast w.r.t. dimension d . Right: The statistical error (blue line) approximately scales as ∼ n -1 / 4 with sample size n . <!-- image --> ## 5 Experiments In this section we experimentally explore the behavior of gradient EM on GMMs. Convergence rates. Wechoose the experimental setting of d = 5 , η = 0 7 . . Weuse n = 2 5 10 , , Gaussian mixtures to learn data generated from one single ground truth Gaussian distribution N ( µ , I ∗ d ) , respectively. Since a closed form expression of the population gradient is intractable, we approximate the gradient step via Monte Carlo method, with sample size 3 5 . × 10 5 . The mixing weights of student GMMare randomly sampled from a standard Dirichlet distribution and set as fixed during gradient EM update. The covariances of all component Gaussians are set as the identity matrix. We recorded the convergence of likelihood function L (estimated also by Monte Carlo method on fresh samples each iteration) and parametric distance ∑ i ∈ [ n ] π i ∥ µ i -µ ∗ ∥ 2 along gradient EM trajectory. The results are reported in Figure 1 (left and middle panel). Both the likelihood L and the parametric distance converges sub-linearly. Weight configurations. We train 3 -component GMM with 3 -different weight configurations and report 4 runs each configuration in Figure 1 (right). Blue: ( 1 3 , 1 3 , 1 3 ) . Orange: ( 1 6 , 1 3 , 1 2 ) , Green: ( 1 20 , 1 5 , 3 4 ) . More evenly distributed weights result in faster convergence. Initialization geometry. We empirically study the bad initialization point µ (0) described in Theorem 7 2 by plotting the gradient norm at µ (0) w.r.t. different dimension d in Figure 2 (left). As theoretically analyzed, the gradient norm ∥∇L ( µ (0)) ∥ at µ (0) decreases exponentially in dimension d . Statistical rates. The statistical rate for EM/gradient EM is another interesting research problem, which we observe empirically in Figure 2 (right). We run gradient EM on 5 -component GMM with equal weights. x-axis: number of training samples, y-axis: parametric error after convergence. For each sample size, we run 50 times and report the average. The statistical errors are reported in the blue line. The red line (function Θ( n -1 / 4 ) ) and green line (linear regression output fitting blue points) are references. The trajectory approximately follows the law of accuracy ∝ n -1 / 4 . While [Wu and Zhou, 2019] rigorously proves the asymptotic statistical rate of ˜ ( O n -1 / 4 ) for the special 2 To prevent numerical underflow issues, we change the constant 12 in µ (0) to 2 . case of 2-GMMs, our experiments imply that the same rate might also apply to the general case of multi-component GMMs. ## 6 Conclusion This paper gives the first global convergence of gradient EM for over-parameterized Gaussian mixture models when the ground truth is a single Gaussian, and rate is sublinear which is exponentially slower than the rate in the exact-parameterization case. One fundamental open problem is to study when one can obtain global convergence of EM or gradient EM for Gaussian mixture models when the ground truth has multiple components. The likelihood-based convergence framework proposed in this paper might be an helpful tool towards solving this general problem. Acknowledgements This work was supported in part by the following grants: NSF TRIPODS II-DMS 20231660, NSF CCF 2212261, NSF CCF 2007036, NSF AF 2312775, NSF IIS 2110170, NSF DMS 2134106, NSF IIS 2143493, and NSF IIS 2229881. ## References Sivaraman Balakrishnan, Martin J. Wainwright, and Bin Yu. Statistical guarantees for the em algorithm: From population to sample-based analysis, 2014. - Constantinos Daskalakis, Christos Tzamos, and Manolis Zampetakis. Ten steps of em suffice for mixtures of two gaussians. In Satyen Kale and Ohad Shamir, editors, Proceedings of the 2017 Conference on Learning Theory , volume 65 of Proceedings of Machine Learning Research , pages 704710. PMLR, 07-10 Jul 2017. URL https://proceedings.mlr.press/v65/daskalakis17b. html . - Ji Xu, Daniel J. Hsu, and Arian Maleki. Global analysis of expectation maximization for mixtures of two gaussians. In Neural Information Processing Systems , 2016. URL https: //api.semanticscholar.org/CorpusID:6310792 . - Raaz Dwivedi, Nhat Ho, Koulik Khamaru, Martin J. Wainwright, and Michael I. Jordan. Theoretical guarantees for em under misspecified gaussian mixture models. In Neural Information Processing Systems , 2018a. URL https://api.semanticscholar.org/CorpusID:54062377 . - Jeongyeol Kwon and Constantine Caramanis. The em algorithm gives sample-optimality for learning mixtures of well-separated gaussians. In Conference on Learning Theory , pages 2425-2487. PMLR, 2020. - Raaz Dwivedi, Nhat Ho, Koulik Khamaru, Martin J. Wainwright, Michael I. Jordan, and Bin Yu. Sharp analysis of expectation-maximization for weakly identifiable models. In International Conference on Artificial Intelligence and Statistics , 2019. URL https://api.semanticscholar.org/ CorpusID:216036378 . - Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J. Wainwright, and Michael I. Jordan. Local maxima in the likelihood of gaussian mixture models: Structural results and algorithmic consequences. In Neural Information Processing Systems , 2016. URL https: //api.semanticscholar.org/CorpusID:3200184 . - Raaz Dwivedi, Nhat Ho, Koulik Khamaru, Michael I. Jordan, Martin J. Wainwright, and Bin Yu. Singularity, misspecification and the convergence rate of em. The Annals of Statistics , 2018b. URL https://api.semanticscholar.org/CorpusID:88517736 . - Bowei Yan, Mingzhang Yin, and Purnamrita Sarkar. Convergence of gradient em on multi-component mixture of gaussians. Advances in Neural Information Processing Systems , 30, 2017. - Lei Xu and Michael I Jordan. On convergence properties of the em algorithm for gaussian mixtures. Neural computation , 8(1):129-151, 1996. - Jinwen Ma, Lei Xu, and Michael I Jordan. Asymptotic convergence rate of the em algorithm for gaussian mixtures. Neural Computation , 12(12):2881-2907, 2000. | Jason M. Klusowski and W. D. Brinda. Statistical guarantees for estimating the centers of a two- component gaussian mixture by em. arXiv: Machine Learning , 2016. URL https://api. semanticscholar.org/CorpusID:88514434 . | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Yihong Wu and Harrison H. Zhou. Randomly initialized em algorithm for two-component gaussian mixture achieves near optimality in o ( √ n ) iterations, 2019. | | Nir Weinberger and Guy Bresler. The em algorithm is adaptively-optimal for unbalanced sym- metric gaussian mixtures. J. Mach. Learn. Res. , 23:103:1-103:79, 2021. URL https: //api.semanticscholar.org/CorpusID:232404093 . | | Jeongyeol Kwon, Wei Qian, Constantine Caramanis, Yudong Chen, and Damek Davis. Global convergence of the em algorithm for mixtures of two component linear regression. In Conference on Learning Theory , pages 2055-2110. PMLR, 2019. | | Sanjoy Dasgupta and Leonard J. Schulman. A two-round variant of em for gaussian mixtures. In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence , UAI '00, page 152-159, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc. ISBN 1558607099. | | Yudong Chen, Dogyoon Song, Xumei Xi, and Yuqian Zhang. Local minima structures in gaussian mixture models, 2023. | | Ruofei Zhao, Yuanzhi Li, and Yuekai Sun. Statistical convergence of the em algorithm on gaussian mixture models. arXiv preprint arXiv:1810.04090 , 2018. | | Nimrod Segol and Boaz Nadler. Improved convergence guarantees for learning gaussian mixture models by EM and gradient EM. URL http://arxiv.org/abs/2101.00575 . | | Weihang Xu and Simon Du. Over-parameterization exponentially slows down gradient descent for learning a single neuron. In The Thirty Sixth Annual Conference on Learning Theory , pages 1155-1198. PMLR, 2023. | | Frederieke Richert, Roman Worschech, and Bernd Rosenow. Soft mode in the dynamics of over- realizable online learning for soft committee machines. Physical Review E , 105(5):L052302, 2022. | | Nuoya Xiong, Lijun Ding, and Simon S Du. How over-parameterization slows down gradient descent in matrix sensing: The curses of symmetry and initialization. arXiv preprint arXiv:2310.01769 , 2023. | | Jialun Zhang, Salar Fattahi, and Richard Y Zhang. Preconditioned gradient descent for over- parameterized nonconvex matrix factorization. Advances in Neural Information Processing Systems , 34:5985-5996, 2021. | | Jiacheng Zhuo, Jeongyeol Kwon, Nhat Ho, and Constantine Caramanis. On the computational and statistical complexity of over-parameterized matrix sensing. arXiv preprint arXiv:2102.02756 , 2021. | | Charles M. Stein. Estimation of the Mean of a Multivariate Normal Distribution. The Annals of Statistics , 9(6):1135 - 1151, 1981. doi: 10.1214/aos/1176345632. URL https://doi.org/10. 1214/aos/1176345632 . | Yurii Nesterov et al. Lectures on convex optimization , volume 137. Springer, 2018. ## A Missing Proofs and Auxiliary lemmas Proof of Fact 1. It is well known that (see Section 1 of Wu and Zhou [2019]) <!-- formula-not-decoded --> where p µ ( ·| x ) denotes the distribution of hidden variable y (in our case of GMM the index of Gaussian component) conditioned on x , and H denotes information entropy. Since µ ′ = µ is a global minimum of D KL ( p µ ( ·| x ) || p µ ′ ( ·| x )) , we have ∇ D KL ( p µ ( ·| x ) || p µ ( ·| x )) = 0 . Also ∇ H p ( µ ( ·| x )) = 0 since H p ( µ ( ·| x )) is a constant. Therefore <!-- formula-not-decoded --> The proof of Lemma 9 uses ideas from Theorem 1 of Chen et al. [2023] and relies on Stein's identity, which is given by the following lemma. Lemma 16 (Stein [1981]) . For x ∼ N ( µ, σ 2 I d ) and differentiable function g : R d → R we have <!-- formula-not-decoded --> if the two expectations in the above identity exist. Now we are ready to prove Lemma 9. Lemma 9. For any GMM ( µ ) , i ∈ [ n ] , the gradient of Q satisfies <!-- formula-not-decoded --> Proof. Applying Stein's identity (Lemma 16), for each i ∈ [ n ] we have <!-- formula-not-decoded --> Recall that <!-- formula-not-decoded --> The gradient ∇ x ψ i ( x ) could be calculated as ∇ x ψ i ( x ) <!-- formula-not-decoded --> <!-- formula-not-decoded --> note that we used ∑ k ∈ [ n ] ψ i ( x ) = 1 . Then we have <!-- formula-not-decoded --> Proof of Corollary 10. <!-- formula-not-decoded --> <!-- formula-not-decoded --> Lemma 17. For any constant c satisfying 0 &lt; c ≤ 1 3 d , we have <!-- formula-not-decoded --> Proof. Note that E x ∼N (0 ,I d ) [exp ( c ∥ x ∥ )] = M ∥ x ∥ ( c ) is the moment-generating function of ∥ x ∥ . To upper bound the value of a moment generating function at c , we use Lagrange's Mean Value Theorem: <!-- formula-not-decoded --> where ξ ∈ [0 , c ] . Note that M ∥ x ∥ (0) = 1 , So the remaining task is to bound M ′ ∥ x ∥ ( ξ ) . We bound this expectation using truncation method as: <!-- formula-not-decoded --> where V d = π d/ 2 Γ( d/ 2+1) is the volume of d -dimensional unit sphere. Since ∥ x ∥ ≥ 1 ⇒ ∥ ∥ -c x ∥ x ∥ 2 2 ≤ 1 3 d ∥ x ∥ -∥ x ∥ 2 2 ≤ -∥ (1 -1 / (2 d )) x ∥ 2 2 , we have <!-- formula-not-decoded --> where we used ( 2 d 2 d -1 ) d +1 ≤ 4 and the log convexity of Gamma function at the last line. Plugging this back to (10), we get <!-- formula-not-decoded --> Plugging (11) into (9), we obtain the final bound <!-- formula-not-decoded --> <!-- formula-not-decoded --> ̸ Lemma 18. Recall that U = ∑ i ∈ [ n ] ∥ µ i ∥ 2 . For any fixed x ∈ R d , x = 0 and any µ we have <!-- formula-not-decoded --> Proof. <!-- formula-not-decoded --> ## Therefore <!-- formula-not-decoded --> ## B Proofs for Section 3 and 4 ## B.1 Proofs for global convergence analysis Theorem 13. At any two points µ = ( µ ⊤ 1 , . . . , µ ⊤ ⊤ n ) and µ + δ = (( µ 1 + δ 1 ) ⊤ , . . . , ( µ n + δ n ) ⊤ ⊤ ) , if <!-- formula-not-decoded --> then the loss function L satisfies the following smoothness property: for any i ∈ [ n ] we have <!-- formula-not-decoded --> ## Proof. Note that <!-- formula-not-decoded --> Therefore ψ i ( x | µ + δ ) can be bounded as <!-- formula-not-decoded --> <!-- formula-not-decoded --> ## Similarly, we have <!-- formula-not-decoded --> Recall that by Lemma 9 we have ∇ µ i L ( µ ) = E x [ ψ i ( x | µ ) ∑ k ∈ [ n ] ψ k ( x | µ ) µ k ] , so <!-- formula-not-decoded --> where the last inequality is because ψ , ψ i k ≤ 1 and applying (15) and (16). The remaining task is to bound E x [exp (2 ∥ δ i ∥ ∥ ( x ∥ + ∥ µ i ∥ )) -1] . Since 2 ∥ δ i ∥ ≤ 1 3 d , we can use Lemma 17 to bound it as <!-- formula-not-decoded --> where we used exp(1 + x ) ≤ 1 + 2 x, ∀ x ∈ [0 , 1] at the last line. Plugging this back to (17), we get <!-- formula-not-decoded --> Theorem 14. The loss function can be upper bounded as <!-- formula-not-decoded --> Proof. Since the logarithm function is concave, by Jensen's inequality we have <!-- formula-not-decoded --> Lemma 12. For any GMM ( µ ) we have <!-- formula-not-decoded --> Proof. Consider two cases: Case 1. There exists k ∈ [ n ] such that ∥ µ k -µ i max ∥ ≥ µ max 2 . Then by Lemma 19 and Lemma 11 we have <!-- formula-not-decoded --> Case2. For ∀ k ∈ [ n ] , ∥ µ i max -µ k ∥ &lt; µ max 2 . Then by Lemma 20 we have E x [ ∥ ˜ ψ µ ( x ) ∥ 2 ] ≥ 1 4 µ 2 max ≥ Ω(exp( -8 µ 2 max ) µ 4 max ) ≥ Ω(exp( -8 U µ ) 4 max ) ≥ Ω ( exp( -8 U π ) 2 min d (1+ µ max √ d ) 2 µ 4 max ) , (since e -x x ≤ 1 , ∀ x ). Lemma 19. For any GMM ( µ ) , if there exists k ∈ [ n ] such that ∥ µ k -µ i max ∥ ≥ µ max 2 , then we have <!-- formula-not-decoded --> Proof. By Cauchy-Schwarz inequality, we have ∥ a ∥ 2 + ∥ ∥ b 2 ≥ 1 2 ∥ a - ∥ b 2 , so for ∀ i ∈ [ n ] we have <!-- formula-not-decoded --> <!-- formula-not-decoded --> Therefore <!-- formula-not-decoded --> µ where the last inequality is because ∥ µ k -µ i max ∥ ≥ max 2 and ∑ i π i = 1 . Lemma 20. For any GMM ( µ ) , if for ∀ k ∈ [ n ] we have ∥ µ i max -µ k ∥ &lt; µ max 2 , then <!-- formula-not-decoded --> Proof. For any k ∈ [ n ] , by Cauchy-Schwarz inequality we have <!-- formula-not-decoded --> where the last inequality is because ∥ µ i max -µ k ∥ &lt; µ max 2 . Note that (20) implies ⟨ µ , µ k i max ⟩ &gt; 1 2 µ max , so for ∀ x ∈ R d we have <!-- formula-not-decoded --> where we used ∑ k ∈ [ n ] ψ k ( x ) = 1 at the last inequality. Lemma 11. For any GMM ( µ ) we have <!-- formula-not-decoded --> Proof. The key idea is to consider the gradient of ˜ ψ µ , which can be calculated as <!-- formula-not-decoded --> where we used (8) in the second identity. By Cauchy-Schwarz inequality, we have ∥ a ∥ 2 + ∥ ∥ b 2 ≥ 1 2 ∥ a - ∥ b 2 , which implies <!-- formula-not-decoded --> where we used ∂ ∂t ˜ ψ µ ( tx ) = ∇ ˜ ψ µ ( tx x ) at the second to last identity. Careful readers might notice that the term ( ∫ 1 t = -1 ∥ x ∥ · x ⊤ ∇ ˜ ψ µ ( tx x ) d t ) 2 is not well-defined when x = 0 , but we can still calculate its expectation over the whole probability space since the integration is only singular on a zero-measure set. ̸ For each x = 0 , by (22) we have <!-- formula-not-decoded --> So <!-- formula-not-decoded --> where we used Lemma 18 at the fourth line and Cauchy-Schwarz inequality at the last line. The last step is to lower bound E x [ ⟨ µ i -µ , x j ⟩ 2 (1 -exp( -4 µ max ∥ x ∥ )) /µ max ] . Since x is sampled from N (0 , I d ) , which is spherically symmetric, we know that the two random variables { x, ∥ x ∥} are independent. Therefore <!-- formula-not-decoded --> For the first term in (25), we have E x [ ⟨ µ i -µ , x j ⟩ 2 ] = ∥ µ i -µ j ∥ 2 /d since x is spherically symmetrically distributed. By norm-concentration inequality of Gaussian [Dasgupta and Schulman, 2000] we know that Pr [ ∥ x ∥ ≥ √ d 2 ] ≥ 1 / 50 , ∀ d . The second term in (25) can be therefore lower bounded as <!-- formula-not-decoded --> Plugging (26) into (25), we get <!-- formula-not-decoded --> Now we can plug (27) into (24) and get <!-- formula-not-decoded --> where we used the inequality ∀ t ≥ 0 , e -t ≤ 1 1+ t at the second to last line. Theorem 2. Consider training a student n -component GMM initialized from µ (0) = ( µ 1 (0) ⊤ , . . . , µ n (0) ⊤ ⊤ ) to learn a single-component ground truth GMM N (0 , I d ) with population gradient EM algorithm. If the step size satisfies η ≤ O ( exp( -8 U (0)) π 2 min n d 2 2 ( 1 µ max(0) + µ max (0)) 2 ) , then gradient EM converges globally with rate <!-- formula-not-decoded --> where γ = Ω ( η exp( -16 U (0)) π 4 min n d 2 2 (1+ µ max (0) √ dn ) 4 ) ∈ R + . Recall that µ max (0) = max {∥ µ 1 (0) ∥ , . . . , ∥ µ n (0) ∥} and U (0) = ∑ i ∈ [ n ] ∥ µ i (0) ∥ 2 are two initialization constants. Proof. We use mathematical induction to prove Theorem 2, by proving the following two conditions inductively: <!-- formula-not-decoded --> <!-- formula-not-decoded --> Note that (30) directly implies the theorem, so now we just need to prove (29) and (30) together. The induction base for t = 0 is trivial. Now suppose the conditions hold for time step t , consider t +1 . By induction hypothesis (29) we have ∥ µ i ( ) t ∥ ≤ µ max ( ) t ≤ √ nµ max (0) , ∀ t . Proof of (30) . Since ∇ µ Q ( µ µ | ) = ∇ L µ ( µ ) , we can apply classical analysis of gradient descent [Nesterov et al., 2018] as <!-- formula-not-decoded --> Note that the gradient norm can be upper bounded as <!-- formula-not-decoded --> Then for any s ∈ [0 , 1] , we have ∥ sη ∇ µ i L ( µ ( )) t ∥ ≤ ηnµ max (0) ≤ 1 max 6 { d, 2 ∥ µ i ( ) t ∥} . So we can apply Theorem 13 and get <!-- formula-not-decoded --> Therefore for ∀ s ∈ [0 , 1] , <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> Plugging (32) into (31), since η ≤ O ( 1 √ dn 2 ( µ 2 max (0)+1) ) we have <!-- formula-not-decoded --> By Lemma 12 we can lower bound the gradient norm as <!-- formula-not-decoded --> Combining (34) and (33), we have <!-- formula-not-decoded --> Note that the above inequality implies L ( µ ( t +1)) ≤ L ( µ ( )) t , therefore <!-- formula-not-decoded --> On the other hand, by induction hypothesis we have 1 L 2 ( µ ( )) t ≥ γt + 1 L 2 ( µ (0)) , combined with the above inequality, we have 1 L 2 ( µ ( +1)) t ≥ 1 L 2 ( µ ( )) t + γ ≥ γ t ( + 1) + 1 L 2 ( µ (0)) , which finishes the proof of (30). Proof of (29) . The dynamics of potential function U can be calculated as <!-- formula-not-decoded --> By induction hypothesis, the first term I 1 can be bounded by Lemma 12 as <!-- formula-not-decoded --> The second term I 2 is a perturbation term that can be upper bounded by Lemma 9 as I <!-- formula-not-decoded --> where we use triangle inequality twice at the second and third line, and Cauchy-Schwarz inequality twice at the fourth and fifth line. Putting (38), (37) and (36) together, we get <!-- formula-not-decoded --> Consider two cases: <!-- formula-not-decoded --> <!-- formula-not-decoded --> note that we used η ≤ O ( exp( -8 U (0)) π 2 min n d 2 (1+ µ max (0) √ nd ) 2 ) n 2 µ 2 max (0) . b). If U ( µ ( )) t &lt; 1 2 U (0) , then U ( µ ( t +1)) ≤ (1 + η 2 ) U ( µ ( )) t ≤ 2 U ( µ ( )) t ≤ U (0) . Since (29) holds in both cases, our proof is done. ## B.2 Proofs for Section 3.2 Lemma 15. For any µ satisfying ∥ µ 1 ∥ ∥ , µ 2 ∥ ≥ 10 √ d, ∥ µ 3 ∥ , . . . , ∥ µ n ∥ ≤ √ d , the gradient of L at µ can be upper bounded as <!-- formula-not-decoded --> Proof. Recall that the gradient has the form ∇ µ i L ( µ ) = E x [ ψ i ( x ) ∑ k ∈ [ n ] ψ k ( x µ ) k ] , hence its norm can be upper bounded as <!-- formula-not-decoded --> For any ∥ x ∥ ≤ 2 √ d and i &gt; 2 , we have exp( -∥ x -µ i ∥ 2 / 2) ≥ exp( - ∥ ( x ∥ + ∥ µ i ∥ ) 2 / 2) ≥ exp( -9 d/ 2) , while for i ∈ { 1 2 , } , exp( -∥ x -µ i ∥ 2 / 2) ≤ exp( - ∥ ( µ i ∥ - ∥ x ∥ ) 2 / 2) ≤ exp( -(10 √ d -2 √ d ) 2 / 2) = exp( -32 d ) . Since ψ i ( x ) ∝ exp( -∥ x -µ i ∥ 2 / 2) we have <!-- formula-not-decoded --> Therefore the first term in (36) can be bounded as E x [ ∑ k ∈ [ n ] ψ k ( x ) ∥ µ k ∥ ∥ ∣ ∣ ∣ ∣ ∣ x ∥ ≤ 2 √ d ] ≤ ∥ ( µ 3 ∥ + · · · + ∥ µ n ∥ ) + exp( -25 d )( ∥ µ 1 ∥ + ∥ µ 2 ∥ ) . On the other hand, by tail bound of the norm of Gaussian vectors (see Lemma 8 of [Yan et al., 2017]) we have Pr [ ∥ x ∥ &gt; 2 √ d ] ≤ exp( -d ) . Putting everything together, (39) can be further bounded as <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> Theorem 7. For any n ≥ 3 , define ˜(0) µ = ( µ ⊤ 1 (0) , . . . , µ ⊤ n (0)) as follows: µ 1 (0) = 12 √ de , µ 1 2 (0) = -12 √ de , µ 1 3 (0) = · · · = µ n (0) = 0 , where e 1 is a standard unit vector. Then population gradient EM initialized with means ˜(0) µ and equal weights π 1 = . . . = π n = 1 /n will be trapped in a bad local region around ˜(0) µ for exponentially long time T = 1 30 η e d = 1 30 η exp(Θ( U (0))) . More rigorously, for any 0 ≤ t ≤ T, ∃ i ∈ [ n ] such that <!-- formula-not-decoded --> Proof. We prove the following statement inductively: ∀ 0 ≤ t ≤ T : <!-- formula-not-decoded --> <!-- formula-not-decoded --> (40) states that during the gradient EM update, µ 1 will keep stationary at 0 . while the symmetry between µ , . . . , µ 2 n will be preserved. The induction base is trivial. Now suppose (41), (40) holds for 0 1 , , . . . , t , we prove the case for t +1 . Proof of (40) . Due to the induction hypothesis, one can see from direct calculation that ∀ x, we have ψ i ( x | µ ( )) = t ψ i ( - | x µ ( )) t for i = 3 , . . . , n , and ψ 1 ( x | µ ( )) = t ψ 2 ( - | x µ ( )) t . Consequently for ∀ i &gt; 2 we have <!-- formula-not-decoded --> <!-- formula-not-decoded --> = 1 2 E x [ ψ i ( x )( ψ 1 ( x )( µ 1 ( ) + t µ 2 ( )) + t ψ 2 ( x )( µ 2 ( ) + t µ 1 ( )))] = 0 t ⇒ µ it 1 +1) = µ i ( ) = 0 t . Similarly, for µ , µ 1 2 we have <!-- formula-not-decoded --> = E x [ ψ 2 ( -x )( ψ 2 ( -x µ ) 1 + ψ 1 ( -x µ ) 2 )] = -E x [ ψ 2 ( -x )( ψ 2 ( -x µ ) 2 + ψ 1 ( -x µ ) 1 )] = -∇ µ 2 L ( µ ( )) t . This combined with the induction hypothesis implies µ 2 ( t +1) = -µ 1 ( t +1) , (40) is proved. ## Proof of (41) . By induction hypothesis, we have ∀ i, ∥ √ µ i ( ) t -µ i (0) ∥ ≤ ηt · (60 √ de -d ) ≤ ηT · (60 √ de -d ) ≤ 2 √ So ∀ i ∈ { 1 2 , } ∥ , µ i ( ) t ∥ ≤ ∥ µ i (0) ∥ +2 d &lt; 15 √ d . Then by Lemma 15, ∀ i ∈ [ n ] we have d. ∥∇ µ i L ( µ ( )) t ∥ ≤ 2( ∥ µ 3 ∥ + · · · + ∥ µ n ∥ )+2exp( -d )( ∥ µ 1 ∥ + ∥ µ 2 ∥ ) ≤ 4 exp( -d ) 15 · √ d = 60 √ de -d , note that here we used µ 3 ( ) = t · · · = µ n ( ) = 0 t . Therefore by the induction hypothesis we have ∥ µ i ( t +1) -µ i (0) ∥ ≤ ηt · (60 √ de -d ) + η ∥∇ µ i L ( µ ( )) t ∥ ≤ η t ( +1) · (60 √ de -d ) , (41) is proven. By (41), ∀ 0 ≤ t ≤ √ T , for i = 1 2 , we have ∥ µ i ( ) t ∥ ≥ ∥ µ i (0) ∥ - ∥ µ i ( ) t -µ i (0) ∥ ≥ 12 √ d -ηT (60 √ de -d ) ≥ 12 d -2 √ d = 10 √ d. Our proof is done. ## NeurIPS Paper Checklist ## 1. Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: [Yes] Justification: See the summary of main contributions in Section 1 and main results in Section 3. ## Guidelines: - · The answer NA means that the abstract and introduction do not include the claims made in the paper. - · The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. - · The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. - · It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. ## 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: As a theoretical work, the major assumptions and limitations of our results are presented in the introduction part of Section 1. ## Guidelines: - · The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. - · The authors are encouraged to create a separate "Limitations" section in their paper. - · The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. - · The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. - · The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. - · The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. - · If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. - · While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. ## 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? ## Answer: [Yes] Justification: The settings and assumptions are introduced in Section 1. The complete proof of all theorems are provided in the Appendix. ## Guidelines: - · The answer NA means that the paper does not include theoretical results. - · All the theorems, formulas, and proofs in the paper should be numbered and crossreferenced. - · All assumptions should be clearly stated or referenced in the statement of any theorems. - · The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. - · Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. - · Theorems and Lemmas that the proof relies upon should be properly referenced. ## 4. Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: We give the details of our synthetic experiments. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. - · If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. - · Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. - · While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example - (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. - (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. - (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). - (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. ## 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [No] Justification: We only run a small-scale experiment to verify an optimization phenomenon in our theory. ## Guidelines: - · The answer NA means that paper does not include experiments requiring code. - · Please see the NeurIPS code and data submission guidelines ( https://nips.cc/ public/guides/CodeSubmissionPolicy ) for more details. - · While we encourage the release of code and data, we understand that this might not be possible, so 'No' is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). - · The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines ( https: //nips.cc/public/guides/CodeSubmissionPolicy ) for more details. - · The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. - · The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. - · At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). - · Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. ## 6. Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: We give the details about our synthetic experiment. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. - · The full details can be provided either with the code, in appendix, or as supplemental material. ## 7. Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [No] Justification: This paper focuses on the optimization aspect. Our experiment shows the optimization phenomenon on synthetic data, and we do not study the statistical aspect. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. - · The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). - · The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) - · The assumptions made should be given (e.g., Normally distributed errors). - · It should be clear whether the error bar is the standard deviation or the standard error of the mean. - · It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. - · For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). - · If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. ## 8. Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [No] Justification: Our experiment only shows the phenomenon on small-scale synthetic data, so we did not record the computation resource used. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. - · The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. - · The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). ## 9. Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ? Answer: [Yes] Justification: We conform, in every respect, with the NeurIPS Code of Ethics. ## Guidelines: - · The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. - · If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. - · The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). ## 10. Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [Yes] Justification: This is a theoretical work. It societal impact lies within its potential pratical applications. ## Guidelines: - · The answer NA means that there is no societal impact of the work performed. - · If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. - · Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. - · The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. - · The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. - · If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). ## 11. Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [NA] Justification: This is a theoretical work and has no such risks. ## Guidelines: - · The answer NA means that the paper poses no such risks. - · Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. - · Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. - · We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. ## 12. Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: [NA] Justification: This is a theoretical work and does not use existing assets. ## Guidelines: - · The answer NA means that the paper does not use existing assets. - · The authors should cite the original paper that produced the code package or dataset. - · The authors should state which version of the asset is used and, if possible, include a URL. - · The name of the license (e.g., CC-BY 4.0) should be included for each asset. - · For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. - · If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. - · For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. - · If this information is not available online, the authors are encouraged to reach out to the asset's creators. ## 13. New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [NA] Justification: As a theoretical work, we does not release such new assets. ## Guidelines: - · The answer NA means that the paper does not release new assets. - · Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. - · The paper should discuss whether and how consent was obtained from people whose asset is used. - · At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. ## 14. Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [NA] Justification: We do not involve crowdsourcing nor research with human subjects. ## Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. - · According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. ## 15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [NA] Justification: We do not involve crowdsourcing nor research with human subjects. ## Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. - · We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. - · For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
zv4UISZzp5
IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation
As Large Language Models (LLMs) become more capable of handling increasingly complex tasks, the evaluation set must keep pace with these advancements to ensure it remains sufficiently discriminative. Item Discrimination (ID) theory, which is widely used in educational assessment, measures the ability of individual test items to differentiate between high and low performers. Inspired by this theory, we propose an ID-induced prompt synthesis framework for evaluating LLMs so that the evaluation set continually updates and refines according to model abilities. Our data synthesis framework prioritizes both breadth and specificity. It can generate prompts that comprehensively evaluate the capabilities of LLMs while revealing meaningful performance differences between models, allowing for effective discrimination of their relative strengths and weaknesses across various tasks and domains. To produce high-quality data, we incorporate a self-correct mechanism into our generalization framework and develop two models to predict prompt discrimination and difficulty score to facilitate our data synthesis framework, contributing valuable tools to evaluation data synthesis research. We apply our generated data to evaluate five SOTA models. Our data achieves an average score of 51.92, accompanied by a variance of 10.06. By contrast, previous works (i.e., SELF-INSTRUCT and WizardLM) obtain an average score exceeding 67, with a variance below 3.2. The results demonstrate that the data generated by our framework is more challenging and discriminative compared to previous works. We will release a dataset of over 3,000 carefully crafted prompts to facilitate evaluation research of LLMs.
https://openreview.net/pdf/74ed0078ffe00fb63ba32cc447f4540054349fbb.pdf
[ { "confidence": 3, "rating": 7, "review_id": "ecG0hpo8bm", "review_text": "This paper proposes a method of generating prompts for evaluating large language models such that the prompts are dynamic and allow for showing meaningful performance gaps between different language models.The authors show that the generated data is more-challenging and discriminative than prior datasets.\n\n- Work is very timely and addresses a major issue in how we can better evaluate LLMs which are continuously improving and saturating existing benchmarks.\n- Good to see that the generated prompts are indeed harder than baseline datasets - this should indicate that the prompts are challenging enough to provide decent signal on a language model's capabilities.\n- Experimented with many SOTA models and compared with several baseline datasets.\n\nThe main weakness of this work is that much of the pipeline relies prompting language models to modify seed data. This means that the performance of the language model plays a huge role in the quality of the resulting data. Given that the pipeline seems to have many different steps, each of these steps can introduce errors since LLMs are not fully reliable. It then becomes crucial to have a way of verifying that the generated questions are of high quality. There's also a concern that the ground truth answers might not be entirely accurate. The authors mention both of these issues as limitations.\n\n- If a particular language model is used to generate data using the proposed method, is there any bias where that model will perform better at solving those problems? For example, if Claude generates the prompt set, will the prompt set be easier for Claude than GPT?\n- Is the data generation done for a set of language models or for each individual language model? In other words, are the prompts being dynamically changed with respect to a single language model's response or all language model responses? Specifically, Section 2.2 says that the method \"rephrases the question based on the response from the LLM\" - which LLM is this statement referring to?\n- Are there any experiments to verify the robustness of each individual step in the pipeline? It seems like the current experiments are meant to verify the final output of the pipeline, not the in-between steps." }, { "confidence": 3, "rating": 4, "review_id": "kLpxn5sGzh", "review_text": "The paper proposes a prompt synthesis framework for evaluating LLMs to accurately reflect different Large Language Model abilities. The authors develop two models to measure LLMs’ question discriminative power and difficulty. This study presents “instruction gradient” and “response gradient” methods to exploit rule sets to generalize questions.\n\nThe paper focuses on the generation of a large number of queries and corresponding answers on general language and mathematical topics. They have released a set of over 3000 questions for LLM evaluation. Their proposed metrics (discrimination index and difficulty score) show significant improvement in the quality of the benchmark datasets.\n\nAlthough the paper tries to solve a crucial research area in the scope of LLM evaluation, the study lacks in many different ways. The textual flow is difficult to follow. Many of the concepts introduced were not properly described or not cited with previous work’s references. These issues restricted the reviewability of this study.\n\n1. The proposed methods - “Instruction gradient” and “response gradient” are not properly described in the manuscript. Authors should write the working procedure of these methods in detail in the main manuscript, as these are the centerpiece of the whole question generation process.\n\n 2. “Generalizing questions from seed data based on the \"instruction gradient\" restricts the diversity and confines the content to specific topics” - is unclear. Consider explaining.\n\n 3. In section 2.3 - Assessing the Usability of General Text Questions: How is the assessment done? Is it done manually with human input? Or by an autonomic process/model?\n 4. In section 2.3 - CoT Check for Mathematical Questions: “we use Hunyuan to assess the reasonableness of the question, which successfully identifies the unreasonableness of the problem and corrects it based on the assessment process.” - How can it be ensured that the model successfully identifies the unreasonableness? Provide a theoretical/experimental study.\n 5. In section 2.4 - Acquiring reference answers: lines 133-136, are the answers scored by human participants?\n 6. In section 2.4 - Acquiring reference answers: line 140, What is meant by a “collective voting mechanism”? Please explain clearly.\n 7. In section 2.5 - lines 148-149, what are “label discrimination indexes”?\n a. In line 149, “the prompt includes four features” - How did you select these features? Provide some analysis.\n b. In lines 162-164, How did you select the threshold values? (e.g., “Low” means less than or equal to 0.1, “High” means values greater than 0.25, etc.). \n c. In line 168, “discrimination level label ranging from 0-3” - Is this range acquired by observations? Or have you performed some analyses on the score expressions?\n 8. In equation 4, what does the “score” mean? Is it the evaluation score that is depicted in Table 1?\n a. If you are using the same “score” to calculate the difficulty score and the discrimination indexes, does that mean a question is more difficult if a question is more discriminative?" }, { "confidence": 2, "rating": 4, "review_id": "UZwVXcmL62", "review_text": "The paper introduces a novel framework for evaluating Large Language Models LLMs) based on Item Discrimination ID theory, which generates adaptive, high- quality prompts to effectively differentiate model performance. Key contributions include a dynamic evaluation set that evolves with LLM advancements, a self- correct mechanism for prompt precision, and models to estimate prompt discrimination and difficulty. The authors validate their framework by testing it on five state-of-the-art models and release a dataset of over 3,000 prompts to aid further research, demonstrating enhanced challenge and discrimination over previous methods.\n\nThe paper proposes a novel prompt generation method to produce more challenging evaluation data.\nThe paper is well-structured and clearly written. The methodology and evaluation criteria are explained clearly, making the paper accessible to a broad audience.\n\nThe paper only used one LLM Hunyuan) to generalize data and did not verify whether the proposed method can generalize to other LLMs.\nIt is debatable whether using test data generated by an LLM to evaluate the performance of LLMs has practical value. The paper lacks validation of the effectiveness of the machine-generated test set, such as comparing its metrics with those of other human-annotated datasets.\nThe paper lacks an analysis of the diversity of the data used to produce the test set.\n\nThe concerns are included in the weaknesses." } ]
## IDGen: Item Discrimination Induced Prompt Generation for LLM Evaluation Fan Lin , Shuyi Xie , Yong Dai , Wenlin Yao 2 , Tianjiao Lang 2 , Yu Zhang 1 † . 1 SouthEast University, Nanjing, China, 2 Tencent, Shenzhen, China 1 2 , ∗ 2 ∗ 2 ∗ ## Abstract As Large Language Models (LLMs) grow increasingly adept at managing complex tasks, the evaluation set must keep pace with these advancements to ensure it remains sufficiently discriminative. Item Discrimination (ID) theory, which is widely used in educational assessment, measures the ability of individual test items to differentiate between high and low performers. Inspired by this theory, we propose an ID-induced prompt synthesis framework for evaluating LLMs to ensure the evaluation set can continually update and refine according to model abilities. Our data synthesis framework prioritizes both breadth and specificity. It can generate prompts that comprehensively evaluate the capabilities of LLMs while revealing meaningful performance differences between models, allowing for effective discrimination of their relative strengths and weaknesses across various tasks and domains. To produce high-quality data, we incorporate a self-correct mechanism into our generalization framework, and develop two models to predict prompt discrimination and difficulty score to facilitate our data synthesis framework, contributing valuable tools to evaluation data synthesis research. We apply our generated data to evaluate five SOTA models. Our data achieves an average score of 51.92, accompanied by a variance of 10.06. By contrast, previous works (i.e., SELF-INSTRUCT and WizardLM) obtain an average score exceeding 67, with a variance below 3.2. The results demonstrate that the data generated by our framework is more challenging and discriminative compared to previous works. We will release a dataset of over 3,000 carefully crafted prompts to facilitate evaluation research of LLMs. 3 ## 1 Introduction The rapid advancement of LLMs, such as OpenAI's ChatGPT, Anthropic's Claude [1], and Facebook's LLaMA series [2, 3], has revolutionized the field of Natural Language Processing (NLP) in recent years. Model evaluation plays a crucial role in the development of LLMs, as it guides the iterative improvements during training, enables the selection of the best model variations, and facilitates their deployment in real-world applications [4, 5]. Recognizing the importance of model evaluation, researchers have made great efforts to create comprehensive benchmarks. Many of these benchmarks consist of multiple-choice questions in English [6, 7], as the results are easily obtainable through string matching. Some researchers [8] have extended these datasets to non-English languages, adapting the content to new linguistic and cultural contexts through translation. These datasets often result from either extensive public data collection or through manual or model-assisted data synthesis processes. ∗ Equal contribution. Work done during the internship of Lin at Tencent. † Corresponding author. 3 Code and data are available at https://github.com/DUTlf/IDGen.git Despite these advances, existing evaluation frameworks exhibit crucial limitations, particularly in their ability to discriminate between LLMs of varying capabilities. The predominant use of multiple-choice questions restricts the evaluation to specific competencies, potentially overlooking the full generative potential of LLMs, including their instruction-following ability. Merely translating prompts from one language to another language may not adequately demonstrate a model's proficiency within a specific cultural context. Furthermore, current generation methods lack a comprehensive mechanism to ensure the correctness of the generated questions, which is especially important for producing mathematic questions. More importantly, the evaluation set should evolve adaptively as LLMs' abilities improve to ensure it remain sufficiently discriminative. As LLMs become more capable of handling increasingly complex tasks, the evaluation set must keep pace with these advancements. Static evaluation sets may be ineffective in differentiating between the performance of various LLMs. To maintain the discriminative power of the evaluation set, it is essential to continually update and refine the questions and tasks according to model abilities. This involves incorporating new challenges that push the boundaries of LLMs' abilities, such as more difficult reasoning, deeper understanding of context, and generating coherent responses to complex instructions. By adaptively updating the evaluation set in the development of LLMs, we can ensure that the benchmarks keep providing valuable insights into the strengths and weaknesses of different models. To address these challenges, we propose a robust framework to produce high-quality, discriminative test data that evolves in alignment with advancements in LLM capabilities. Our framework is inspired by Item Discrimination (ID) Theory [9] that is introduced to assess how well individual questions (items) on a test distinguish between students who perform well on the overall test and those who do not. We adopt ID Theory to ensure each test question's effectiveness in differentiating between higher and lower-ability LLMs. Our framework can generate open-ended questions automatically in both English and Chinese, aimed at capturing a wide spectrum of tasks. Central to our approach is the application of discriminative techniques that enhance the test sets' ability to distinguish between different levels of language understanding, thereby allowing for a more precise evaluation of LLM performance. To achieve this goal, we also introduce two key metrics: question discriminative power and question difficulty, and train corresponding models to measure them. Additionally, we establish an iterative verification process to guarantee the logical soundness and precision of our questions. This multi-round iterative process can better enhance the usability of questions with logical coherence. Our contributions can be summarized as follows: - · We propose a framework for data production and generalization that enables the rapid and high-quality creation of test datasets capable of effectively testing and differentiating LLMs. - · We innovatively adopt discrimination as the guiding principle for data production and generalization, employing rigorous data correction methods throughout the entire data production process to ensure the generated data has high usability and quality. - · We release a comprehensive set of over 3,000 questions, created and refined through our rigorous iterative verification process, to support and enrich the community's resources for LLM evaluation. - · We develop and train two models to measure question discriminative power and difficulty, which we have made available to the open-source community. ## 2 Method In this section, we demontstrate our generalization framework in Figure 1. Assuming We have meticulously handcrafted a batch of high-quality seed data, the first thing is to exploit "instruction gradient" , i.e., specially designed rules from the instruction perspective, to generalize the questions 4 (in Section 2.1). Subsequently, we employ "response gradient" to generalize questions, where the "gradient" refers to the rules for generalizing questions based on LLMs' responses (in Section 2.2). Next, we discuss a self-correct method to rectify the generalization questions, enhancing the usability 4 Inspired by [10], the generalization of instruction data is similar to forward propagation in gradient descent while generalizing instructions by asking questions about the model's response is similar to backpropagation. Therefore, we name our methods as "instruction gradient" and "response gradient" respectively. Figure 1: Self-Correct Instruction Generalization Framework with "Instruction Gradient" . Firstly, we handcraft a batch of seed data, dividing it into math category and general text category. Next, we generate a batch of dataset through "instruction gradient". For instructions in the general text category, we generate responses using a LLM, then generate new instructions through "response gradient", i.e., propose new questions based on the response. For problems in the math category, we check them through CoT check, and apply self-correct according to the CoT check's feedback. <!-- image --> of these data (in Section 2.3). Finally, we illustrate how we get high-quality answers from LLMs (in Section 2.4). To ensure the discrimitive power of the generation evaluation set, we propose to train an discrimination estimation model and an difficulty estimation model to formulate two metrics (in Section 2.5 and 2.6). ## 2.1 Data Generalization Based on "Instruction Gradient" From the perspective of instruction, we aim to design constraints to guide the generated content, ensuring the generated questions adhere to specified content and also possess diversity and distinctiveness. Inspired by previous work, we refer to this feedback from the instruction perspective as "instruction gradient". We apply Hunyuan for data generalization 5 A.2. Since different types of data require distinct generalization techniques, we create various methods tailored to different categories of data [11]. We systematically develop several strategies that enhance both the difficulty and the discriminative power of the generated questions. In our study, we delineate 12 strategies tailored for addressing general text questions, such as "restricting the language used in responses", and formulate 8 distinct strategies for tackling mathematical questions, including "introduce additional variables". A comprehensive enumeration of these methodologies is presented in Appendix Table 5. In the data production process, for general text questions, we select 1-3 suitable generalization strategies. This approach aims to increase the complexity and differentiation of the generated questions, making them richer and more diverse. In contrast, for mathematical questions, we randomly select a single strategy. This choice helps to minimize the risk of generating unusable questions and ensures consistency in the problem generation process. ## 2.2 Instruction Generalization Reliant on "Response Gradient" Generalizing questions from seed data based on the "instruction gradient" restricts the diversity and confines Especiallynt to specific topics. To enhance the diversity of general evaluation questions, we adopt a two-pronged approach. Firstly, we ensure overall diversity by expanding the variety of seed data. Secondly, we amplify question diversity by leveraging the "response gradient." 5 Unless otherwise specified, all data in this document are generated by Hunyuan (Hunyuan-standard), which is a Large Language Model developed by Tencent. Additionally, data generated using other LLMs are also employed, and the relevant experiments and analyses are provided in the Appendix For general text questions, we rephrase the question based on the response from the LLM. Specifically, we append a brief instruction to the question, which serves to guide the LLM in generating responses with more comprehensive information. After acquiring additional information, we generate new questions based on them. However, to ensure the difficulty and discrimination of the data, we embed a reference question in the prompt. We present the instruction that guides more information for the LLM and the prompt rephrasing questions based on response information in Appendix Table 6 and Table 7. For example, for the question "How can NLP technology be used to detect and prevent the spread of fake news?", using the instruction gradient for generalization, we can obtain a new question "List three specific methods to detect and prevent the spread of fake news using NLP technology and explain their principles," which still revolves around the original question for expansion or transformation. To address this, we consider discarding the original question and using the LLM-generated response as information or knowledge. At this point, we only generate questions based on a piece of text, and the questions may become more interesting based on the content of the response. In the above example, we could generate a new question "What NLP tasks are typically addressed by fact-checking and source analysis techniques?" ## 2.3 Evaluating Question Usability Assessing the Usability of General Text Questions Inspired by the methodologies outlined in [10], we craft a comprehensive set of evaluation criteria encompassing safety, neutrality, integrity and feasibility. These criteria are important in assessing the suitability of general text questions for our purposes. The detailed descriptions of these evaluative measures are presented in Appendix Table 8. We consider a question to be unusable if it fails to meet any of these criteria. CoT Check for Mathematical Questions For mathematical questions, it is insufficient to estimate whether the generalization question is reasonable or not using a simple instruction for an LLM. As depicted in Figure2, consider the question "There are ten red, yellow, and blue balls in a box. You wish to draw a ball at random from the box. What are the chances of drawing a red ball?". The generated question includes two conditions, "totaling 30 balls" and "the probability of drawing a yellow ball is 1/4", which leads to a result of 7.5 yellow balls. This result contradicts common sense because there should not be a "half" yellow ball. Such scenarios are frequently undetectable by simply asking LLMs to determine whether the problem is reasonable. Inspired by CoT (Chain of Thought) [12], we come up with a CoT-based approach to check whether generated questions are reasonable or not. Specially, we start with the concepts and move on to analyze each element of the problem, ensuring the rationality and precision of mathematical questions by assessing logical connections, solvability, and meticulously examining assumptions and calculation outcomes in the present context. The details are depicted in Appendix Table 10. Through our proposed inspection mechanism, we can dramatically eliminate the problems of conceptual errors, logical contradictions, violations of common sense, missing conditions, and unsolvable questions. In the example shown in Figure 2, we use Hunyuan to assess the reasonableness of the question, which successfully identifies the unreasonableness of the problem and corrects it based on the assessment process. During data production, to further improve the usability of the questions, we invoke both Hunyuan and Hunyuan-pro to assess the reasonableness of the questions separately. We consider a question to be reasonable only when both models judge it to be reasonable. ## 2.4 Acquiring reference answers To ensure the highest quality of responses for general inquiries, we adopt a sophisticated multi-model strategy. We use five SOTA LLM models: Hunyuan, GPT-4, GPT4-Turbo, Wenxin 4, and Qwen 6 to generate preliminary answers independently. This diverse approach leverages the unique strengths of each model and cover a broad spectrum of perspectives. For general text questions, inspired by [13], we design each response from the following perspectives: Safety (0-30 points), Correctness (0-10 points), Relevance (0-10 points), Comprehensiveness (0-10 points), Readability (0-20 points), Richness (0-10 points), and Humanization (0-10 points). Hunyuan scores each response according to 6 In this document, GPT-4 refers to gpt-4-32k-0613, GPT-4 Turbo refers to gpt-4-turbo-2024-04-09, Wenxin 4 refers to ERNIE-Bot 4.0, Qwen refers to Qwen-Max, Claude 3 refers to claude-3-opus-20240229. Figure 2: Chain of Thought Check Illustrated with a Mathematical Question Example <!-- image --> these criteria to maintain a high standard of consistency and fairness. The response with the highest score from Hunyuan is then selected as the reference answer and is used to establish a benchmark. For mathematical questions, our approach is equally robust but tailored to the specificity of the subject. The most accurate response is determined through a collective voting mechanism 7 involving three models: Hunyuan, GPT-4 Turbo, and Qwen. The answer that obtains the majority of votes from these models is then selected as the reference answer. In cases where there is a tie, one of the tied responses is randomly chosen to serve as the reference. To further ensure the precision of our answers, we enlist mathematics experts to review and refine the responses where necessary. This step is crucial to validate the accuracy and dependability of the answers we provide. ## 2.5 Discrimination Estimation Model To facilitate data synthesis and ensure new data are discriminative enough, we train a model to measure discrimination of each data instance. Each training instance includes prompts and its label discrimination indexes. The prompt includes four features: question, its corresponding category, mean length of this category, and length ratio. These features are significant and provide meaningful reference for understanding the discrimination of the questions. We apply a five-point rating system to score each response from different models and obtain the discrimination indexes. The specific scoring criteria can be seen in Table 1. Table 1: Score Evaluation Criteria. | Evaluation Criteria | Evaluation Score | |--------------------------------------------------|--------------------| | The answer is irrelevant or harmful. | 0 | | The answer is wrong or contains factual errors. | 1 | | The answer is correct but the process has flaws. | 2 | | The answer is right. | 3 | | The answer exceeds expectations. | 4 | Refer to the discrimination indexes proposed by T.L.Kelley [15] in education studies, we design a calculation formula for discrimination indexes by utilizing the evaluation data derived from several models including GPT-4, ChatGPT, Wenxin 4, and Qwen. Regarding the same question, arrange each model's average score in a descending order. The average score for the top 50% is denoted as PH, while the average score for the bottom 50% is indicated as PL. The computation of the discrimination indexes is articulated by the following formula: 7 We hope to select high-quality responses as reference answers as much as possible. Related work[14] studies the theoretical basis of the collective voting mechanism and discusses the impact of different voting methods on social welfare. Inspired by this, we introduce a "collective voting mechanism" to select reference answers by comparing and voting among multiple responses. <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> where N is the number of models, M is the total number of evaluators, score ik is the k-th evaluator's score for the i-th evaluation model's answer, and max\_score is the highest score of the evaluation (in our scoring system, the max\_score is 4). We map the discrimination indexes to four levels: "Low" for values less than or equal to 0.1, "Relatively Low" for values greater than 0.1 but less than or equal to 0.15, "Relatively High" for values greater than 0.15 but less than or equal to 0.25, and "High" for values greater than 0.25. The threshold here is estimated based on the distribution of 100,000-level evaluation data. Weconstruct the training data by sampling from 12 widely adopted models (GPT-4, ChatGPT, Wenxin 4, Claude3, LLaMa2, Baichuan3, GLM-4, etc.). A training sample includes information such as the question, category, reference answer, and the ratio of the question length to the average length of its category, etc. The expected label is a discrimination level label ranging from 0-3, which implies superior discrimination when the number is high. Then, Baichuan2-13B is used as the backbone to be supervised and finetuned as a discrimination model. To more accurately obtain the discriminative power of the dataset, we calculate the discrimination indexes through manual annotation. Specifically, we first invoke multiple models to respond to the questions. Then we engage relevant experts to score the responses of various models according to Table 1. Subsequently, we calculate the discrimination indexes for each sample using Formula 3 and then determine the average value across all samples to obtain the discrimination indexes for this batch of data. ## 2.6 Difficulty Estimation Model In our research, we utilize the "difficulty level" metric to assess a dataset's ability to differentiate various model by categorizing data into varying levels of difficulty. However, assessing difficulty using a general-purpose LLM such as GPT-4 can yield inaccurate estimation. Moreover, manually annotating the difficulty level of each instance is time-consuming and labor-intensive, and there's often a discrepancy between the difficulty perceived by humans and the difficulty perceived by models. To address these challenges, we have developed a specialized model designed specifically to evaluate the difficulty of each question. We train this model using a dataset compiled from the evaluation results of various LLMs, similar to those used in training our discrimination estimation model. The difficulty of each sample is determined based on these models' evaluation scores. This method provides a more standardized and efficient means of measuring difficulty, avoiding the biases and limitations of manual annotation and annotation by general-purpose models. <!-- formula-not-decoded --> Where N is the number of evaluation models, M is the total number of evaluators, and score ij is the j-th evaluator's score for the i-th evaluation model's answer. We map the difficulty scores to three difficulty levels: "easy" for scores less than or equal to 1.5, "medium" for scores greater than 1.5 but less than or equal to 2.5, and "hard" for scores greater than 2.5. The difficulty level is applied to evaluate the quality of generated instructions. We believe that the difficulty score can serve as a reference for discriminability. In addition, a high difficulty score for a question does not necessarily mean that it is more discriminative. For example, for a question with a max score of 3, if the evaluation scores are both 0 and 0, according to the formula, its difficulty score is 3, and the discrimination score is 0, meaning that the question is very difficult, and the LLMs cannot answer it correctly, so the question is not discriminative. However, if the evaluation scores are 0 and 3, we can calculate that its difficulty score is 1.5, and the discrimination score is 1, indicating that the question can effectively distinguish the level of LLMs. We propose a difficulty estimation model by fine-tuning BaiChuan2-13B pretrained model. The training sample input is the same with the discrimination estimation model. The output is 1-3, representing the difficulty level, and the training instruction is changed to estimate the difficulty of the problem. The complexity of generalization questions can be predicted via utilizing our difficulty estimation model. With the predicted complexity, we can sift out evaluating data exhibiting a specified degree of difficulty. In order to obtain a more accurate measure of the difficulty of the dataset, we calculate the difficulty scores through manual annotation. After obtaining the annotators' scores for the responses of various models to the questions, we can calculate the difficulty score for each sample using Formula 4. By calculating the average value of the difficulty scores for all samples in the dataset, we obtain the difficulty score for these samples. ## 3 Experiment In this section, we first introduce the experimental setup, including the baselines and the seed data. Then we compare our generalization data with some publicly usable datasets and analyze the results. Subsequently, we assess the usability of our data, as well as the discrimination indexes and difficulty score, and provide relevant analysis. Finally, we describe the performance of our proposed discrimination and difficulty estimation models. ## 3.1 Experiment setting Baselines (1) SELF-INSTRUCT [16]: it generates approximately 82k instances from 175 humancreated handwritten instructions. - (2) Instruction Tuning with GPT-4 Dataset [8]: in this task, GPT-4 is used to generate responses to the 52k English data from Alpaca dataset. The questions are then translated into Chinese using chatgpt, and responses are generated again using GPT-4. - (3) WizardLM [17]: it leverages the ChatGPT API to generate 250k instructions based on the training data from Alpaca Dataset. Seed Data We establish a dataset comprising 6,000 instances by employing human annotators, which consists of Chinese and English subsets. The Chinese subset[11] is composed of approximately 5,000 instances, while the English subset contains 1,000 instances. The English instances include 175 sourced from the SELF-INSTRUCT dataset [16] and the remainder from the Alpaca dataset [18]. These questions are categorized into general text questions and mathematical questions, which are generalized separately. Furthermore, the seed data typically exhibit a high degree of diversity, while the categories of generalized data generally remain unchanged. ## 3.2 Comparison to Public Datasets Discrimination indexes and difficulty score analysis Including the first three baselines that have already been introduced, we have also incorporated other datasets: - (1) SELF-INSTRUCT\_seed\_data: 175 seed data used to generate the SELF-INSTRUCT dataset. - (2)SELF-INSTRUCT-Ours: the dataset created by generalizing the 175 seed data points from the SELF-INSTRUCT dataset using our proposed method. - (3) Ours (hard seed data): the data obtained by applying our method to questions that human experts consider to be more challenging. We sample responses from GLM-4, GPT-4 Turbo, GPT-4, Claude 3, and Qwen. We ask 104 domain experts to score the responses from each model according to the criteria outlined in Table 1 and calculate the discrimination and difficulty. By averaging these values, we obtain the overall discrimination indexes and difficulty scores for each dataset. The results are presented in Table 2. Table 2: Comparison of Discrimination Indexes and Difficulty Score on Public Datasets. | Dataset | Discrimination Indexes | Difficulty Score | |-------------------------------|--------------------------|--------------------| | WizardLM | 0.14 | 1.235 | | Instruction Tuning with GPT-4 | 0.098 | 1.215 | | SELF-INSTRUCT_seed_data | 0.061 | 1.146 | | SELF-INSTRUCT | 0.109 | 1.319 | | SELF-INSTRUCT-Ours | 0.137 | 1.541 | | Ours (hard seed data) | 0.204 | 1.941 | From Table 2, among the public datasets for generalization tasks, the WizardLM dataset stands out with a discrimination indexes of 0.140. It is slightly outpaced by the SELF-INSTRUCT dataset, which has a discrimination indexes of 0.109. SELF-INSTRUCT dataset also leads in difficulty score of 1.319. Generalization data with the same 175 seed data, using our method, achieving a higher distinctiveness of 0.137, close to the WizardLM dataset, and the highest difficulty score of 1.541 among its variants. Applying our method to more complex seed data yields even better results, with top scores of 0.204 in discrimination indexes and 1.941 in difficulty scores. These findings highlight that our method not only improves discrimination indexes and difficulty scores but also benefits significantly from the use of challenging seed data, emphasizing the seed data's quality as a crucial factor for generating superior generalized datasets. Performance across LLMs We convert the expert scores assigned to each model into a percentagebased scale. We then compute the average scores for each dataset and determine the mean and variance of the scores for each model across the various datasets. The detailed evaluation results are presented in Table 3. Table 3: Evaluation Scores for Various Models on Different Datasets. | Model | GLM-4 | GPT-4 Turbo | GPT-4 | Claude3 | Qwen | Mean | Var. | |--------------------------------|---------|---------------|---------|-----------|--------|--------|--------| | WizardLM | 69.85 | 72.06 | 66.91 | 68.01 | 68.75 | 69.12 | 3.08 | | Instruction Tunning with GPT-4 | 69.89 | 69.25 | 67.58 | 71.29 | 70.14 | 69.63 | 1.49 | | SELF-INSTRUCT_seed_data | 71.86 | 72.01 | 70.06 | 71.71 | 71.11 | 71.35 | 0.51 | | SELF-INSTRUCT | 67.73 | 69.48 | 66.86 | 63.95 | 67.15 | 67.03 | 3.2 | | SELF-INSTRUCT-Ours | 70.51 | 74.29 | 68.7 | 66.87 | 67.48 | 69.57 | 7.12 | | Ours (hard seed data) | 51.75 | 56.73 | 47.51 | 53.75 | 49.85 | 51.92 | 10.06 | In Table 3, "Var." refers to "Variance". We can draw the following conclusions from table mentioned above. Firstly, The datasets of WizardLM, Instruction Tuning with GPT-4, and SELF-INSTRUCT exhibit improvements in both mean scores and variances across the five models compared to their initial seed data. Notably, the SELF-INSTRUCT dataset has the lowest mean score and the highest variance, suggesting that it can effectively differentiate the performance of various models to a certain extent. Secondly, the generalization data based on SELF-INSTRUCT\_ seed\_data using our method (SELF-INSTRUCT-Ours) has a lower average score than the seed data, implying that our method may increase the difficulty of the questions. In addition, its variance of 7.12 is higher than that of other datasets generalized from the same seed data, reinforcing the notion that our method can enhance the distinctiveness of the data. Lastly, the dataset generated by our method using more challenging seed questions has the lowest average score of 51.92 and the highest variance of 10.06 among all datasets. This highlights the difficulty and distinctive nature of the questions, underscoring the importance of the seed data. Our analysis also reveals that the choice of seed data plays a crucial role in differentiating the performance of various models. ## 3.3 Analysis on the generalization questions To evaluate the effectiveness of our framework's generalization, we collect 192 general text questions and 385 mathematical questions as seed data, and conduct generalization within our framework. For both the seed data and the generalization data, we generate responses from GPT-4, Wenxin 4, and Qwen. Subsequently, we hire 43 experts to assess the usability of the questions and score the responses according to Table 1. Based on these scores, we calculate the discrimination indexes and difficulty scores for both seed seed and generalization questions. The results are shown in Table 4. Table 4: Evaluation Scores for Seed Data and Generalization Questions. | Data | General Text Question | General Text Question | General Text Question | Mathematical Question | Mathematical Question | Mathematical Question | |-------------------------|-------------------------|-------------------------|-------------------------|-------------------------|-------------------------|-------------------------| | | Usa. | Dis. | Dif. | Usa. | Dis. | Dif. | | Seed Data | - | 0.08 | 0.52 | - | 0.09 | 1.21 | | Generalization Question | 94.0% | 0.17 | 1.08 | 96.4% | 0.20 | 1.58 | The data in Table 4 are all obtained from manual annotation, where "Usa." stands for "Usability", "Dis." represents "Discrimination Indexes", and "Dif." denotes "Difficulty Score". From the table, we can draw the following conclusions: Firstly, the generalization questions have a high usability rate, which proves the effectiveness of our method for identifying or correcting the reasonableness of questions. Secondly, by comparing the values of the generalization questions with seed data, our method can enhance the discrimination indexes and difficulty score of the questions to some extent. ## 3.4 Discrimination and Difficulty Estimation Models Performance Evaluation Accuracy of Discrimination Estimation Model We utilize 1500 evaluation data to validate the agreement between the discrimination estimation model predictions and human evaluations. The agreement is 0.72. Comparison of Difficulty Estimation Model with Human Evaluation We select 1,500 humanevaluated questions and let both humans and models predict their difficulty levels respectively. Then, based on the scores from the evaluations, we calculate the difficulty of each question as the gold label according to difficulty formula. Surprisingly, the model's predictions get a consistency rate of 0.70 with the gold label, while the human predictions have a consistency rate of only 0.52. This result indicates that the model may find problems that humans consider difficult or hard-to-understand to be simple. ## 4 Related Work ## 4.1 Instruction Data Generation Instruction data generation from LLM aims to minimize the expenses of human-written instruction and enhance the quality of the data. With the growing capabilities of LLMs, they are now also capable of generating and evaluating datasets. Pioneer works include [16], [8], [18], which generate instruction data with LLM achieve remarkable success. WizardLM[17] introduces Evol-Instruct, which begins with a basic set of data and expands it into more comprehensive and complex instructions. The specific approach incorporates both in-depth evolving (applicable to complex instructions) and in-breadth evolving (aiming to increase topic coverage and diversity). Ultimately, unqualified data is filtered out using the Evolutionary Elimination rules. Subsequently, the Wizard series of works [19] [20] that utilize Evol-Instruct have emerged, further refining the system to form a more comprehensive and robust framework. Self-Alignment[21] proposes an iterative self-training algorithm that utilizes a large amount of unlabeled data to create high-quality instruction datasets. ## 4.2 Data Quality LIMA (Less is more for alignment)[22] is primarily debunking the myth of RLHF by demonstrating that, given a really good dataset, it is possible to train a small supervised model that can perform almost as same as GPT-3 or in fact better than Google's BARD and in some cases like GPT-4 equivalent. Finding high-quality data without resorting to human curation remains a significant challenge. Utilizing the super LLM to assess the validity of data and evaluate its quality is also one of the prevalent methods. The design of Self-Alignment[21] involves a scoring standard on a 5-point scale with the help of LLM to assess the quality of generated instructions and responses, focusing on aspects such as relevance, completeness, usefulness, and the accuracy of the responses to the questions. Furthermore, some studies have attempted to directly extract metrics from existing data to reflect the quality of the data, such as Information Fidelity (IFD) [23]. This approach aims to quantify the richness and accuracy of information in the dataset, thereby providing an intuitive measure of data quality. However, the calculation of metrics like IFD often relies on additional large language models, which to some extent increases the complexity and computational cost of the method. Despite this, these metrics offer an automated means of data quality assessment that does not depend on manual annotation, which is of significant value for rapid evaluation of large-scale datasets. ## 4.3 LLMEvaluation Due to the high convenience in both data collection and automatic evaluation, many evaluation benchmarks have emerged. AGIEval [24] collects official, public, and high-standard admission and qualification exam questions to the human-level capabilities of LLMs. C-Eval [25] is a comprehensive Chinese evaluation suite and contains 13,948 multi-choice questions, including middle school, high school, college, and professional. However, they have overlooked the discrimination indexes of the evaluation questions. ## 5 Conclusion In our research, we emphasize the importance of data discrimination and difficulty and introduce a new framework for instruction generalization. Experimental results prove that this framework effectively enhances the discrimination and difficulty of instructions, generating data that more effectively distinguish the capabilities of different models. We release a batch of generalization data to help the community evaluate models more effectively, thus promoting the enhancement of model capabilities. Additionally, we provide models for identifying discrimination and difficulty to help quickly judge the quality of data. Limitations The effectiveness of our framework relies on the performance of large models, and we hope to see the advent of even more powerful large models in the future. Our method does not directly yield accurate reference answers for mathematical problems that require strong logical reasoning, and the accuracy of these answers requires improvement. Broader Impact The data generalized by our framework effectively differentiates the performance of current mainstream models, offering a research direction for the effective improvement of model capabilities. We also note that the quality of seed data affects the discriminability and difficulty of the data after generalization. We look forward to the arrival of high-performance models and high-quality data in the future, creating a complementary trend. ## 6 Acknowledgement Our work was supported by Tencent, Shenzhen, China, and Southeast University, Nanjing, China. We thank Zishan Xu, Zhichao Hu, Xiao Xiao, and Yuhong Liu of Tencent for their assistance with our work. ## References | [1] | Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862 , 2022. | |-------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [2] | Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timo- thée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 , 2023. | | [3] | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 , 2023. | | [4] | Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. A survey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology , 15(3):1-45, 2024. | | [5] | Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223 , 2023. | | [6] | Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 , 2022. | | [7] | Ahmad Ghazal, Tilmann Rabl, Minqing Hu, Francois Raab, Meikel Poess, Alain Crolotte, and Hans-Arno Jacobsen. Bigbench: Towards an industry standard benchmark for big data analytics. In Proceedings of the 2013 ACM SIGMOD international conference on Management of data , pages 1197-1208, 2013. | | [8] | Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4, 2023. | | [9] | C Boopathiraj and K Chellamani. Analysis of test items on difficulty level and discrimina- tion index in the test for research in education. International journal of social science & interdisciplinary research , 2(2):189-193, 2013. | | [10] | Reid Pryzant, Dan Iter, Jerry Li, Yin Tat Lee, Chenguang Zhu, and Michael Zeng. Automatic prompt optimization with "gradient descent" and beam search, 2023. | | [11] | Shuyi Xie, Wenlin Yao, Yong Dai, Shaobo Wang, Donlin Zhou, Lifeng Jin, Xinhua Feng, Pengzhi Wei, Yujie Lin, Zhichao Hu, Dong Yu, Zhengyou Zhang, Jing Nie, and Yuhong Liu. Tencentllmeval: A hierarchical evaluation of real-world capabilities for human-aligned llms, 2023. | | [12] | Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems , 35:24824-24837, 2022. | | [13] | Yilun Liu, Shimin Tao, Xiaofeng Zhao, Ming Zhu, Wenbing Ma, Junhao Zhu, Chang Su, Yutai Hou, Miao Zhang, Min Zhang, et al. Automatic instruction optimization for open-source llm instruction tuning. arXiv preprint arXiv:2311.13246 , 2023. | | [14] | Amartya Sen. Collective choice and social welfare: Expanded edition . Penguin UK, 2017. | | [15] | Truman Lee Kelley. The selection of upper and lower groups for the validation of test items. Journal of Educational Psychology , 30:17-24, 1939. | | [16] | Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instruc- | | [17] | Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions, 2023. | |--------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [18] | Rohan Taori*, Ishaan Gulrajani*, Tianyi Zhang*, Yann Dubois*, Xuechen Li*, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpaca: A strong, replicable instruction-following model. Website, 2023. https://crfm.stanford.edu/2023/03/13/alpaca.html . | | [19] | Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. Wizardcoder: Empowering code large language models with evol-instruct. arXiv preprint arXiv:2306.08568 , 2023. | | [20] | Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical rea- soning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583 , 2023. | | [21] | Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, and Mike Lewis. Self-alignment with instruction backtranslation, 2023. | | [22] | Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. Lima: Less is more for alignment, 2023. | | [23] | Ming Li, Yong Zhang, Zhitao Li, Jiuhai Chen, Lichang Chen, Ning Cheng, Jianzong Wang, Tianyi Zhou, and Jing Xiao. From quantity to quality: Boosting llm performance with self- guided data selection for instruction tuning. arXiv preprint arXiv:2308.12032 , 2023. | | [24] | Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364 , 2023. | | [25] | Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322 , 2023. | ## A Appendix / Supplemental Material ## A.1 Additional Details on the Method Generalization Methods for Different Categories We believe that for evaluation data, discrimination and difficulty are important measures of data quality. Inspired by traditional gradient ideas, we hope to find a suitable "gradient" as a generalization method in existing instruction generation to improve the discrimination and difficulty of data. Considering that for different types of data, there should be different suitable generalization methods. Therefore, we have designed different generalization methods for different categories. We have carefully designed some generalization schemes that can improve the difficulty and discrimination of the problem. The list of schemes is presented in Table 5: Table 5: Generalization Methods for Different Categories | Category | Generalization Method | |-----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | General Text Question | 1. Increase the requirements for creativity and novelty 2. Replace general concepts with specific ones 3. Raise the level of abstraction, abstracting problems from concrete instances 4. Integrate knowledge across domains 5. Restrict the language used in responses 6. Design forbidden specific vocabulary, constrain vocabulary usage fre- quency, require the use of specific vocabulary 7. Limit the number of sentences, word count, special formatting, or the number of paragraphs 8. Impose constraints on punctuation marks, such as using or not using specific punctuation symbols 9. Limit the number of placeholders, and choose whether to add postscript or not 10. Restrict the starting or ending words 11. Require highlighting, JSON formatting, or partial quantities 12. Employ multiple constraint methods from the above list | | Mathematics | 1. Change variables 2. Provide programming code 3. Introduce dynamic processes 4. Introduce additional variables 5. Limit methods 6. Combine with non-mathematical domain knowledge 7. Introduce advanced mathematical concepts 8. Combine different mathematical domains | Information Inducer To generate more enriched responses from the LLM for subsequent questions, we incorporate a simple instruction into the questions, which we name the "Information Inducer". Prompt of Generating Questions Based on Response For general text questions, the prompt for generating questions based on responses is shown in Table 7. We provide responses from large models and request the design of new questions, thereby generating a more diverse set of questions. Table 6: Information Inducer for General Text Question | Category | Instruction | |-----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | General Text Question | Please describe the background and relevant details of this problem in detail. Think deeply about the problem from multiple dimensions. Based on this information, provide a comprehensive and in-depth answer or suggestion, and explain the thought process. | The method\_list in the prompt refers to the generalization strategies listed in Table 5, which can serve as a reference during the question design process. Table 7: Prompt of Generating Questions Based on Response You are an experienced educational master with rich expertise. Please combine your expertise to play the role of an "examiner". The candidates are existing AI systems (such as ChatGPT, Qwen, GPT4, etc.). Your task is to design a question based on the given information. Information: {response} ## Question requirements: Please first consider the important criteria within the field of education and use them as a reference for designing the question. The designed question should be logically clear and complete in content. The designed question should be challenging for the candidates. The designed question should have discrimination as much as possible, so that some candidates can answer correctly, while others cannot. The question should be novel and not limited to the given information. Reference question ideas: To increase the discrimination and difficulty of the question, you can refer to the following question ideas: {method\_list} ## Output format requirements: Line 1: Require a point-by-point discussion, give the thinking points that can be mined from the given information, and the "thinking process" for improving the difficulty and discrimination. Line 2: Based on the above thinking, output the designed question. Please output your design: General Text Question Usability Evaluation Criteria For general text questions, we consider the usability of the problems from the perspectives of safety, neutrality, integrity, and feasibility, as shown in Table 8 and further detailed in the instruction scoring prompt presented in Table 9. Table 8: General Text Question Usability Evaluation Criteria | Dimension | Description | |-----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Safety Neutrality Integrity Feasibility | No explicit, politically sensitive, or violent content No bias or racial discrimination in instructions Sufficient information provided to clarify the task Instructions within the AI system's capability range | You are an instruction scorer, responsible for scoring the following instruction: {instruction} ## Evaluation Criteria: Safety (1 point): The instruction does not contain any inappropriate, politically sensitive, violent, or illegal content. Neutrality (1 point): The instruction is unbiased and does not involve racial discrimination. Information completeness (1 point): The instruction provides enough information to clearly execute the task, including all necessary context and specific details, without omitting key information or requiring additional assumptions or inferences. Feasibility (1 point): The instruction does not exceed the capabilities of the AI system. ## Additional Notes: Based on the above criteria, score the instruction. Ensure that the instruction contains all necessary information. For example, if the instruction requires rewriting a sentence, analyzing text, or providing an explanation, it must provide the original sentence to be rewritten, the text to be analyzed, or the specific content to be explained. Instructions lacking this information can be penalized in "information completeness" and "feasibility." ## Output Format: Safety: Neutrality: Information completeness: Feasibility: Total score: Each point should be 0/1, with a total score output of 0-4, without any additional content. CoT Check of Usability for Mathematical Questions For mathematical questions, we design a Chain of Thought (CoT) approach to check the usability of the problems. Starting from the concepts, we delve into each component of the problem, evaluate the logical relationships and solvability, and carefully examine the assumptions and calculation results in the problem to ensure the reasonableness and accuracy of the mathematical questions. As shown in Table 10. Case Study for General Text Questions For general text questions, we provide an additional example to further illustrate the generalization process, as shown in Figure 3. Analysis of Effectiveness Usability: Human-annotated datasets are not necessarily all usable, and they often contain errors. They also need to be repeatedly checked and reviewed to ensure a high level of usability (e.g., above 95%). The usability of the questions in our generated data can reach 94% (based on human-annotated results), and the usability of the evaluation data is satisfactory. In contrast, the usability of Self-Instruct[16] is 79%. Production Efficiency: In this paper, it takes 2-5 calls to check a machine-generated question, with an average time of about 20 seconds per question. In contrast, manual writing takes about 5 minutes per question, and it is subject to fatigue effects. Cost: In this paper, generating and checking a question with the machine involves the input and output of about 9k tokens, costing approximately $0.03. In contrast, the market price for manually writing a usable question is about $2, making the cost of human-annotated datasets relatively high. | Step 1: | Analyze each component of the problem in detail, identify and understand the relevant concepts involved in the problem, and check whether they are defined in mathematics and used appropriately. | |-----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Step 2: | Think deeply about the logical relationships between each component. Evaluate whether the relationships in the problem are mathematically reasonable. If possible, provide supporting mathematical proofs or identify potential contradictions. | | Step 3: | Fully assess the solvability of the problem. Determine whether the problem can be solved and whether there is sufficient information or conditions to solve it. If the problem cannot be solved, point out the missing information or conditions and explain why these are necessary. | | Step 4: | Carefully check to determine whether there are any counter-intuitive or unreasonable assumptions in the problem or steps. Check whether the numbers in the problem and the results of the calculations are consistent with the actual situation, such as whether the relevant results of people/objects are integers, whether there are any violations of odd and even cognition in the problem or process, etc. | <!-- image --> Figure 3: Example of generalization for general text questions. First, we commence with the seed data comprising general text questions and choose 1 to 3 techniques from the method library to furnish specific generalization recommendations for the seed data. In this example, the seed data is mandated to be generalized by "incorporating other philosophical viewpoints", "Add keyword constraints" and "restricting the answer length". Through these methodologies, the generalization question becomes more challenging. Subsequently, we assess the generalization question for safety, neutrality, integrity and feasibility to ascertain their usablity. We retain the qualified instructions and discard unqualified questions. If the generalization questions are qualified, we can employ LLM to generate responses for them and restructure questions based on these responses. In our example, the generalization question that emerges from the LLM's response incorporates philosophical concepts like the "will to power" and the "aspiration to become the Übermensch". The rephrased question introduces a novel perspective, largely contingent on the language model's reply, thereby enriching the diversity of viewpoints in the question set through the applied generalization technique. ## A.2 Supplementary Experiment Ablation Study of Multiple LLMs for generation data We apply our proposed method to some other LLMs, such as GPT-4-turbo (gpt-4-turbo-2024-04-09) and Qwen (Qwen-max), using the same batch of a small amount of seed data, and manually scoring the models' responses to calculate discrimination indexes and map them to the four levels of discrimination indexes. The experimental results are shown in the table below. The results show that there are differences in the effects of these models, and using more powerful models may generate higher quality data. This also confirms the limitation mentioned in the conclusion section of our paper: our framework relies on the performance of large models. | Model | Amount | Low | Relatively Low | Relatively High | High | |------------|----------|-------|------------------|-------------------|--------| | Seed_data | 50 | 45 | 0 | 4 | 1 | | Hunyuan | 50 | 29 | 8 | 8 | 5 | | Qwen | 50 | 28 | 13 | 6 | 3 | | Gpt4-turbo | 50 | 21 | 5 | 10 | 14 | Table 11: Comparison of different models based on performance metrics. Ablation Study of Multi Models for CoT check In the proposed framework, the idea of 'one problem, multiple evaluations' is operationalized by aggregating outcomes from several models. Specifically, we utilize both Hunyuan-standard and Hunyuan-pro to adjudicate the reasonableness of generalization questions. These models apply our Chain of Thought (CoT) method to systematically assess the validity of each question. If either model identifies a question as lacking in reasonableness, that model will initiate a corrective iteration based on its CoT reasoning process. In the event that both models concur on the unreasonableness of a question, the correction process will be guided by the CoT reasoning mechanism employed by Hunyuan-standard. The question will then undergo a subsequent evaluation of its reasonableness. This iterative process is capped at two cycles. Questions that continue to be classified as unreasonable after two iterations are subsequently removed from the question pool. To further investigate the mathematical question usability recognition using single and multiple models, we conduct an ablation study on the generalization data that has an expert-judged usability rate of 64.8%. In this study, we separately count the usability of data after filtering by Hunyuanstandard and Hunyuan-pro, as well as the usability of data under their combined filtering. The results are presented in Table 12. Table 12: Usability of Data after Filtering by Different Models | Judgment Model | Generalization Data Usability | Correction Data Usbility | |--------------------------------|---------------------------------|----------------------------| | Hunyuan-standard | 87.0% | 93.3% | | Hunyuan-pro | 84.6% | 90.3% | | Hunyuan-standard + Hunyuan-pro | 90.0% | 96.4% | In Table 12, "Generalization Data Usability" refers to the usability rate of the data generated by applying our generalization method to a set of seed data, accompanied by the exclusion of any questions considered unreasonable by our proposed Chain of Thought (CoT) method. The "Correction Data Usability" section details the process where the model attempts to correct questions identified as unreasonable by the proposed CoT in generalization data, while leaving the reasonable questions unchanged. The resulting usability rate of the data is then gained. As indicated in Table 12, employing a single model, either Hunyuan-standard or Hunyuan-pro, utilizing the proposed Chain of Thought (CoT) approach to assess question usability yields commendable results. After the rectification of questions and subsequent removal of data still considered unsuitable, the usability rate reaches a threshold of 90%. Furthermore, when both models are deployed in tandem to evaluate question usability and filter out inadmissible questions, there is an observable enhancement in the usability rate in both scenarios. ## NeurIPS Paper Checklist ## 1. Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: [Yes] Justification: The main claims presented in the abstract and introduction accurately reflect the paper's contributions and scope, providing a clear and concise overview of the novel findings, innovations, and the research topic covered in the paper. ## Guidelines: - · The answer NA means that the abstract and introduction do not include the claims made in the paper. - · The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. - · The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. - · It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. ## 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: In the Conclusion section, we discuss the limitations of our work, addressing considerations of robustness with respect to potential assumption violations and providing insights into the computational efficiency of our approach as it scales with dataset size. ## Guidelines: - · The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. - · The authors are encouraged to create a separate "Limitations" section in their paper. - · The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. - · The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. - · The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. - · The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. - · If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. - · While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. ## 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [NA] Justification: Our paper does not include theoretical results ## Guidelines: - · The answer NA means that the paper does not include theoretical results. - · All the theorems, formulas, and proofs in the paper should be numbered and crossreferenced. - · All assumptions should be clearly stated or referenced in the statement of any theorems. - · The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. - · Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in the appendix or supplemental material. - · Theorems and Lemmas that the proof relies upon should be properly referenced. ## 4. Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: We propose a new framework, provide detailed and specific descriptions of its components, and make the content open-source, enabling other researchers to easily reproduce our experimental results. Furthermore, we open-source 3k of data to promote related research and development. These materials are currently under review, and we will make them publicly available afterward. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. - · If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. - · Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. - · While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example - (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. - (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. - (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). - (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. ## 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: The data and code are currently under review. We will make them publicly available at a later time. ## Guidelines: - · The answer NA means that paper does not include experiments requiring code. - · Please see the NeurIPS code and data submission guidelines ( https://nips.cc/ public/guides/CodeSubmissionPolicy ) for more details. - · While we encourage the release of code and data, we understand that this might not be possible, so 'No' is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). - · The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines ( https: //nips.cc/public/guides/CodeSubmissionPolicy ) for more details. - · The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. - · The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. - · At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). - · Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. ## 6. Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: The experimental section 3 provides a detailed description of the experimental setup and specifics. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. - · The full details can be provided either with the code, in appendix, or as supplemental material. ## 7. Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [No] Justification: Due to our limited computational and human resources, we don't report error bars. Guidelines: - · The answer NA means that the paper does not include experiments. - · The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. - · The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). - · The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) - · The assumptions made should be given (e.g., Normally distributed errors). - · It should be clear whether the error bar is the standard deviation or the standard error of the mean. - · It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. - · For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). - · If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. ## 8. Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of computing workers, memory, time of execution) needed to reproduce the experiments? Answer: [Yes] Justification: In the experiment section, we provide a detailed description of the human resources required for the experiments and the situation of models' API interface calls. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. - · The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. - · The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). ## 9. Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ? Answer: [Yes] Justification: The paper ensures the preservation of anonymity. ## Guidelines: - · The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. - · If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. - · The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). ## 10. Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [Yes] Justification: We discuss the potential positive and negative societal impacts of our work in the conclusion section. ## Guidelines: - · The answer NA means that there is no societal impact of the work performed. - · If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. - · Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. - · The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. - · The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. - · If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). ## 11. Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [Yes] Justification: In the methods section, we consider data safety by implementing filtering processes and manually inspecting the publicly released data to ensure its security and prevent potential misuse. ## Guidelines: - · The answer NA means that the paper poses no such risks. - · Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. - · Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. - · We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make the best faith effort. ## 12. Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: [Yes] Justification: We cite the relevant works and provide the appropriate licenses for the released assets properly, ensuring compliance with their terms of use. ## Guidelines: - · The answer NA means that the paper does not use existing assets. - · The authors should cite the original paper that produced the code package or dataset. - · The authors should state which version of the asset is used and, if possible, include a URL. - · The name of the license (e.g., CC-BY 4.0) should be included for each asset. - · For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. - · If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. - · For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. - · If this information is not available online, the authors are encouraged to reach out to the asset's creators. ## 13. New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [Yes] Justification: We provide documentation for the assets released alongside the assets themselves. The content of documentation is still under review and will be made public shortly. ## Guidelines: - · The answer NA means that the paper does not release new assets. - · Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. - · The paper should discuss whether and how consent was obtained from people whose asset is used. - · At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. ## 14. Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [Yes] Justification: We employ annotators for our research and provide them with lawful compensation. ## Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. - · According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. ## 15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [Yes] Justification: The paper involves crowdsourcing experiments and provides a clear description of potential risks incurred by study participants, as well as the disclosure of these risks to the subjects. ## Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. - · We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. - · For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
zuwpeRkJNH
Procedure-Aware Surgical Video-language Pretraining with Hierarchical Knowledge Augmentation
Surgical video-language pretraining (VLP) faces unique challenges due to the knowledge domain gap and the scarcity of multi-modal data. This study aims to bridge the gap by addressing issues regarding textual information loss in surgical lecture videos and the spatial-temporal challenges of surgical VLP. To tackle these issues, we propose a hierarchical knowledge augmentation approach and a novel Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining (PeskaVLP) framework. The proposed knowledge augmentation approach uses large language models (LLM) to refine and enrich surgical concepts, thus providing comprehensive language supervision and reducing the risk of overfitting. The PeskaVLP framework combines language supervision with visual self-supervision, constructing hard negative samples and employing a Dynamic Time Warping (DTW) based loss function to effectively comprehend the cross-modal procedural alignment. Extensive experiments on multiple public surgical scene understanding and cross-modal retrieval datasets show that our proposed method significantly improves zero-shot transferring performance and offers a generalist visual repre- sentation for further advancements in surgical scene understanding. The source code will be available at https://github.com/CAMMA-public/PeskaVLP.
https://openreview.net/pdf/b754552d7cad51cf70357809a56df08d88257ab9.pdf
[ { "confidence": 5, "rating": 8, "review_id": "x9lmNImh2H", "review_text": "The paper addresses challenges in surgical video-language pretraining (VLP) due to the knowledge domain gap and scarcity of multi-modal data. It proposes a hierarchical knowledge augmentation approach and the Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining (PeskaVLP) framework. This approach enhances data efficacy and tackles spatial-temporal challenges by combining language supervision with visual self-supervision. Extensive experiments demonstrate significant improvements in zero-shot transferring performance and the generalist visual representation for surgical scene understanding.\n\nThe paper presents a unique approach to surgical video-language pretraining by employing hierarchical knowledge augmentation using LLMs, significantly improving textual data quality and diversity. The PeskaVLP framework innovatively integrates visual and language supervision, addressing the spatial-temporal challenges in surgical scene understanding. The methodology is meticulously validated through extensive zero-shot and linear-probing evaluations on datasets such as Cholec80 and AutoLaparo, demonstrating substantial performance improvements. The clarity of the presentation, with well-organized sections and effective visual aids, facilitates comprehension. The significant contribution lies in enhancing surgical scene understanding and cross-modal retrieval, making it highly valuable for the NeurIPS community. The paper's originality in using hierarchical pretraining and the detailed discussion on model architectures and initialization underscore its quality and significance in advancing surgical data science.\n\nFirstly, the dataset size is relatively small, with 1,007 videos for phase-level pretraining and 920 for video-level pretraining, which may limit the generalizability of the findings (as mentioned in the supplementary material). I know the difficulty in collecting medical data, but we must be sure that the presented approach can be generalized to different domains and hospitals. Furthermore, I doubt the methodology's potential to process \"noisy\" videos. \nExpanding the dataset and including more diverse surgical procedures would improve robustness. \n\nSecondly, while the paper mentions ASR errors in transcriptions, it does not provide a detailed methodology for handling them. Providing specific techniques for improving transcription accuracy would strengthen the study. \n\nAdditionally, the practical implementation of the PeskaVLP framework in real-world surgical contexts is not thoroughly discussed. Detailing strategies for integration into clinical workflows and addressing potential technological barriers would be beneficial.\n\n1. How do you plan to address the limited sample size and diversity in future studies to improve the generalizability of your findings? Consider expanding the dataset to include a more extensive and more diverse sample of surgical procedures to enhance robustness and applicability.\n\n2. What specific methods did you use to handle ASR errors in transcriptions? How did these errors impact your analysis?\n\n3. How do you manage the computational overhead associated with the hierarchical pretraining and dynamic time-warping processes?" }, { "confidence": 5, "rating": 8, "review_id": "fLzJ6lMID0", "review_text": "The paper presents a novel approach for enhancing surgical video analysis by incorporating procedural awareness. The authors propose a system that integrates knowledge of surgical procedures to improve the identification, segmentation, and annotation of surgical activities in video footage. This approach aims to address challenges such as the variability of surgical techniques and the complexity of visual data in operating rooms. The contributions of the paper include the development of a procedural model that can be aligned with video data, the creation of annotated datasets for training and evaluation, and the demonstration of improved performance over traditional video analysis methods.\n\n1.The integration of procedural knowledge into surgical video analysis is a highly original concept. This approach not only enhances the accuracy of video analysis but also opens new avenues for improving surgical training and documentation.\n\n2.Introduces a novel hierarchical knowledge augmentation technique using large language models to refine surgical concepts. Employs a Dynamic Time Warping-based loss function for effective cross-modal procedural alignment. Demonstrates significant improvements in zero-shot transfer performance across multiple surgical datasets. Provides a robust general visual representation beneficial for various surgical scene understanding tasks.\nWeaknesses:\n\n3.The potential applications of this research in surgical training, intraoperative assistance, and postoperative review are significant. The approach addresses a critical need in medical video analysis, making it highly relevant and impactful.\n\nDataset Limitations: The annotated datasets used for training and evaluation are crucial for the model's success. Expanding the diversity and volume of these datasets would enhance the generalizability of the findings.\n\nGeneralizability: How does the system perform across different types of surgeries (like ophthalmic surgery)? Have you tested its effectiveness in various surgical domains beyond the initial scope?" }, { "confidence": 4, "rating": 6, "review_id": "7Q5nQkdlIh", "review_text": "This paper proposes a Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining (PeskaVLP) method that enriches language supervision with LLM-refined surgical concepts. It further constructs hard negative samples by reversing the text orders at the phase and video levels and employs a Dynamic Time Warping (DTW) based loss to align multimodal procedures. Extensive experiments on multiple surgical procedures and comprehensive evaluations demonstrate the effectiveness of this framework.\n\n- The paper is overall well-written, with the background and motivation well-stated.\n- Using LLM to augment surgical video text descriptions is a good idea to enhance the quality of surgical text narration. It establishes a good baseline and guideline for future works that aim to apply LLM in surgical narratives.\n- A more comprehensive parent-child level cross-modal correspondence was designed using DTW than existing works.\n- Demonstration of the proposed method can close the representation gap for different modality, and analysed both successful and complicated examples.\n\n- By reading the enriched dataset by LLM in Appendix H, I am concerning that the variation and diversity of narration will be removed by the augmentation. Will that cause any problems?\n- In my opinion, using LLM to refine the text description of surgical videos is the most important contribution of this paper. It would be interesting to see if other components are also effective enough without the knowledge augmentation.\n\n- Beyond the current ablation study on PeskaVLP components, would applying the hierarchical knowledge-augmented text data in HecVL improve its performance and if this could yield results competitive with PeskaVLP. This would provide powerful support to verify the extent to which the other components in PeskaVLP contribute to performance, apart from the augmented texts.\n- Although LLM can enhance surgical text quality, is there a concern that the text may become overly standardized? Given that surgeons' narratives in the operating room tend to be more oral, concise, and sometimes include jargon, will there be a performance degradation in real-world, real-time applications where LLM augmentation is impractical?\n- In Appendix E, Figure 4, it would also be interesting if the authors could visualize the embeddings of HecVL, since it performs better than SurgVLP.\n- In Table 3, on Cholec80, Moco pre-trained on Cholec80 (V) has better performance but wasn't in bold, do I misinterpret something?" }, { "confidence": 4, "rating": 5, "review_id": "y1b7xOz8Eh", "review_text": "The paper presents a new framework called PeskaVLP for surgical video-language pretraining. A hierarchical knowledge augmentation approach is used for enriching text information. The pretraining is implemented with the proposed language supervision and visual self-supervision. A new training objective is proposed for surgical procedural understanding. Extensive experiments are conducted to demonstrate the effectiveness on the surgical phase recognition task and cross-modal retrieval task on multiple downstream dataset.\n\n1. This paper addresses the problem of VLP in the surgical scene. A hierarchical knowledge augmentation is proposed to tackle the problem of lack of textual information in the surgical field.\n2. The paper is generally well-written and easy to follow.\n\n1. The explanation of method details is not clear enough, and there is a lack of discussion on some experimental results\n2. The proposed method is based on certain assumptions but lacks a comprehensive consideration of applicability.\n\n1. What types of surgeries are included in the SVL dataset used in the paper? Is it suitable for the pretraining task? Could it affect the results on the downstream dataset?\n\n2. In Section 3.2, where hierarchical knowledge is augmented by GPT, the authors need to discuss the ability of LLMs to generate accurate textual information to describe the surgical steps in the domain-specific surgical context, especially considering the fine-grained image-text alignment in the clip-level (only 4 frames).\n\n3. In Section 3.2, the authors calculate textual similarity between the pseudo step generated by the LLM and the narration. How is this similarity calculated? Is there an ablation study on the effectiveness of the three behavior in knowledge augmentation?\n\n4. In Section 3.3.1, the authors implement visual self-supervision based on augmentation. Which specific augmentations were used? Do the augmentations affect the corresponding text's semantic information? For example, using flipping could impact descriptions related to left/right information in surgical operation.\n\n5. In Section 3.3.2, procedural information based on surgical phases is used. However, in surgical datasets, such as the cholec80 and AutoLaparo mentioned in the paper, the surgical process does not always follow a linear order defined by Phase 1-N and may include repeated phases. The authors should discuss the applicability of the method design in such situations.\n\n6. In Table 3, for the experimental results on cholec80, Moco (third row) provides the best results, but this is not highlighted in bold in the table. This needs to be corrected and the corresponding discussion should be provided. The same issue appears with the results using Moco (second row) on the StrasBypass70 dataset." } ]
## Procedure-Aware Surgical Video-language Pretraining with Hierarchical Knowledge Augmentation Kun Yuan 1 2 3 , , Vinkle Srivastav 1 2 , Nassir Navab 3 Nicolas Padoy 1 2 , 1 University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France 2 IHU Strasbourg, Strasbourg, France 3 CAMP, Technische Universität München, Munich, Germany {kyuan,srivastav,npadoy}@unistra.fr [email protected] ## Abstract Surgical video-language pretraining (VLP) faces unique challenges due to the knowledge domain gap and the scarcity of multi-modal data. This study aims to bridge the gap by addressing issues regarding textual information loss in surgical lecture videos and the spatial-temporal challenges of surgical VLP. To tackle these issues, we propose a hierarchical knowledge augmentation approach and a novel Procedure-Encoded Surgical Knowledge-Augmented Video-Language Pretraining ( PeskaVLP ) framework. The proposed knowledge augmentation approach uses large language models (LLM) to refine and enrich surgical concepts, thus providing comprehensive language supervision and reducing the risk of overfitting. The PeskaVLP framework combines language supervision with visual self-supervision, constructing hard negative samples and employing a Dynamic Time Warping (DTW) based loss function to effectively comprehend the cross-modal procedural alignment. Extensive experiments on multiple public surgical scene understanding and cross-modal retrieval datasets show that our proposed method significantly improves zero-shot transferring performance and offers a generalist visual representation for further advancements in surgical scene understanding. The source code will be available at https://github.com/CAMMA-public/PeskaVLP . ## 1 Introduction The recent advancements in multi-modal representation learning, particularly with the introduction of CLIP [52], have led to the development of models capable of understanding a wide range of visual concepts using natural language supervision [34, 41]. The expressive natural language has allowed these models to shift from task-specific to more generalist applications [49, 82, 83]. The learned representations of these models are robust, facilitating effective performance across diverse visual tasks without the need for task-specific fine-tuning [68, 81]. However, despite the impressive progress made by these models in the general computer vision domain, the effectiveness of these methods in domain-specific settings remains uncertain. This concern is particularly relevant to the field of Surgical Data Science (SDS), an emerging interdisciplinary domain that utilizes deep learning and computer vision techniques to analyze surgical data [44, 43, 74]. A key component of SDS is the analysis of intraoperative surgical videos captured through endoscopes or laparoscopes. Analyzing these videos presents several unique challenges compared to the general computer vision datasets. Unlike general computer vision datasets [47, 52, 7], surgical videos can last several hours and capture complex and fine-grained activities within a narrow field of view. This requires development of computational approaches to decompose and model the surgical procedures at multiple hierarchical levels, including the entire (a) Clip-level Pairing Figure 1: Illustratsion of video-language pretraining with hierarchical video-text pairs. At phase- and videolevel, one parent-level text is paired to multiple child-level texts. <!-- image --> procedure [29], phases [67, 16], steps [54, 31], atomic actions [6, 8], and action triplets [50, 62]. Moreover, surgical language involves specialized vocabulary, and annotating videos requires clinical expertise, limiting dataset scalability. Consequently, current deep learning applications are restricted to single-centric, fully-supervised, and task-specific approaches [3, 6, 31, 50, 55, 57, 67, 69, 74]. To bridge the gap, recent efforts have focused on creating surgical video-text pretraining datasets by curating surgical lecture videos from online e-learning platforms and pairing them with transcribed narrations using audio speech recognition (ASR) methods. Subsequently, a CLIP-style model [76] is trained contrastively to match the video clips to their corresponding textual descriptions. Building on this, the HecVL approach introduces hierarchical texts, including phase-level keystep descriptions and video-level summaries that provide hierarchical goals of the surgical procedure [75]. However, challenges persist due to the smaller size of the surgical video-language pretraining dataset, noisy transcribed narrations, limited variability in phase-level descriptions, and strong temporal dependencies in surgical procedures, where actions and keysteps occur in a specific routine order. These issues hinder the accurate learning of multi-modal surgical representations. To address these challenges, we propose P rocedureE ncoded S urgical K nowledgeA ugmented V ideoL anguage P retraining (PeskaVLP), which boosts data efficacy and tackles the spatial-temporal challenges inherent in surgical procedures from two perspectives. First, we introduce hierarchical knowledge augmentation to mitigate the problem of textual information loss in surgical videolanguage pretraining datasets. We argue that the internal knowledge of LLMs serves as a valuable surgical knowledge base, enriching and correcting text descriptions while preserving the original key concepts and meanings. Therefore, we utilize the large language model (LLM) prompted with different behaviors as an external knowledge base to correct, explain, or summarize the hierarchical texts in the surgical video-language pretraining dataset, thus providing diverse and better language supervision for multi-modal pretraining. Additionally, it reduces the risk of overfitting by preventing the text encoder from repeatedly encountering the same keystep texts in each epoch. From the pretraining objective perspective, we perform the hierarchical video-language pretraining, as shown in Fig. 1, with a novel hierarchy-specific loss, LecNCE . Specifically, we combine language supervision with visual self-supervision at the clip-level pretraining to introduce additional supervision signals within vision modality, making the pretraining efficient with a small surgical dataset [76]. At phase- and video-level pretraining, we construct hard negative samples by reversing the order of texts, followed by a Dynamic Time Warping (DTW) based loss function to learn the temporal alignment between video frames and texts, thus facilitating the understanding of cross-modal procedural alignment during pretraining. We summarize our contributions as follows: First, we propose an LLM-based knowledge augmentation to handle surgery-specific textual information loss in the dataset, providing more densely interconnected natural language supervision from surgical lecture videos. Second, our proposed hierarchical video-language pretraining method enforces the understanding of the spatial-temporal characteristics of surgical lecture videos at different hierarchical levels. The pretrained PeskaVLP demonstrates state-of-the-art transferability and visual representation to different surgical scene understanding downstream datasets [67, 69, 31], across types of surgical procedures and clinical centers. It also shows strong multi-modal alignment ability through the cross-modal retrieval task at multiple hierarchical levels. ## 2 Related Works Surgical Video-Language Pretraining: many works have demonstrated the effectiveness of learning visual representations from the natural language supervision of corresponding text [7, 70, 77, 40, 46, 42, 34]. These methods conduct contrastive learning [51] to match the video clips (or images) with their corresponding narrations (or captions). Similarly in the medical field, recent works have started to curate large-scale multi-modal data through hospital-sourced chest radiological reports [28, 12] and online platforms [76, 27, 26], e.g., YouTube and Twitter, to perform vision-language pretraining. However, these works encounter the sample efficiency issue when handling the smaller surgical video-language pretraining dataset (SVL) [76]. Recent works improve the data efficacy and zero-shot performance of CLIP-style models [48, 37, 25]. However, they do not capture procedural dependency from the long-form surgical videos beyond the video clip and text matching. Hierarchical pretraining methods [4, 79, 75] propose to pair video clips of different durations to different hierarchical levels of texts, covering both short- and long-term understanding. Paprika [80] builds a procedural knowledge graph and elicits the knowledge node during the video-language pretraining process. Textual Augmentation with Knowledge Base: the success of vision-language pretraining is highly dependent on the quality and quantity of available multi-modal data. Recent research [38] shows that a smaller high-quality dataset can outperform a larger low-quality dataset. Common practices improve the quality by textual augmentation, including EDA [37], masked token modeling [65], and captioning loss [72]. Recent studies have used synthesized captions from captioning models to achieve notable improvements [33, 32, 58]. However, they show scalability deficiency and world knowledge loss in models trained with synthetic captions [73], which their initial benchmark success has largely obscured. To inject the knowledge, K-Lite [63] enriches the texts with WordNet [15] and Wiktionary [45] knowledge base. Merlot [78] learns script knowledge representations from millions of YouTube videos, however, a knowledge domain gap exists when applying this to the surgical field. The recent advent of large language models like GPT4 [2] and Llama series [66] have been a game-changer, as they encode rich domain-specific knowledge, e.g., clinical knowledge [64], motivating LaCLIP [14] to augment textual inputs through the LLM rewrites. ## 3 Approach ## 3.1 Dataset and Contrastive Learning Learning joint video and language embedding space requires a large-scale video-language dataset, however, such datasets are expensive and time-consuming to create in the surgical field. Therefore, the first surgical video-language pretraining dataset, i.e., SVL [76], is proposed by obtaining around a thousand surgical lecture videos from surgical education platforms. SVL collects ∼ 300 hours of lecture videos accompanied by narration texts obtained using Audio Speech Recognition (ASR) methods, providing ∼ 26 k video clip-narration pairs for contrastive video-language pretraining. Specifically, short video clips x c and their corresponding narration texts y n are treated as positive pairs P n , and the unpaired ones are treated as negative pairs N n . Then, the contrastive training loss Figure 2: Hierarchical Knowledge augmentation for hierarchical texts. (a) the process of building a surgical step knowledge base. (b) the process of improving hierarchical textual quality based on LLM. <!-- image --> InfoNCE [51] can be formulated as follows: <!-- formula-not-decoded --> where B represents the batch size. The f and g are visual and textual encoders that generate embedding vectors for videos and texts, respectively. This loss function aligns two modalities by increasing the cosine similarity between paired videos and texts and decreasing the unpaired ones, as shown in Fig. 1 (a). Despite reaching an impressive data scale, the imprecision of the ASR system and the scarcity of surgical lecture videos limit the natural language supervision from SVL. Therefore, HecVL [75] proposes to incorporate hierarchical language supervision by extracting additional phase-level keystep and video-level abstract texts from lecture videos' metadata, as shown in Fig. 1 (b) and (c). In this work, we use this hierarchical video-language pretraining dataset and perform hierarchical knowledge augmentation to improve the textual quality. ## 3.2 Hierarchical Knowledge Augmentation Quality of language supervision in the multi-modal representation learning matters [1, 37, 36], especially when the surgical video-language dataset is not 'big' enough, e.g., millions of multi-modal samples used in [52, 47], to sufficiently cover the visual-linguistic concepts. In this work, we find that the texts suffer from different types of degradation at different hierarchies, failing to provide accurate and broad concepts for pretraining. Specifically, as shown in Fig. 2, narration texts are mostly sentence fragments and easily affected by misspelling errors, therefore altering the original key concepts. The keystep texts are mostly short and abstract, resulting in a narrow set of linguistic concepts that could show poor transferability to the downstream datasets, which usually come with a different set of concepts [63, 18]. The abstract texts sometimes include redundant and useless information, such as author and citation information. To address the above hierarchy-specific textual degradation, we propose a hierarchical knowledge augmentation to correct, explain, and summarize the narration, the keystep, and the abstract texts, respectively, by eliciting LLM's encoded surgical knowledge [64]. For each hierarchy, we manually design the system prompt and several input-output examples for LLM. Thus, we obtain hierarchical LLM assistants with different behaviors of using internal surgical knowledge to augment the texts: Figure 3: The pretraining pipeline of different hierarchies. We combine language supervision and visual selfsupervision at clip-level pretraining. We conduct the procedure-aware contrastive learning at phase/video-level pretraining. <!-- image --> Narration. We ask the LLM to behave as a 'recipe' to come up with a list of sequential steps that complete the given surgery. For each lecture video, we feed its title as input and obtain the list of pseudo steps, as shown in Fig. 2 (a), building a surgical step knowledge base. Then, we assign these pseudo steps to narration texts based on textual similarity. This implicitly corrects the typos in transcribed narrations and augments the textual input based on the LLM's surgical knowledge. Keystep. As shown in Fig. 2 (b), we ask the LLM to behave like a 'dictionary' to explain the meaning of the keystep. Specifically, the LLM assistant expands the given keystep into a description of the main surgical events, anatomies, and instruments involved. This enlarges the textual semantic information of each keystep and provides more expressive language supervision for pertaining. Abstract. As shown in Fig. 2 (b), we ask the LLM to behave like a 'summarizer' that captures the key concepts of the given abstract texts, e.g., surgical type, anatomies, and so on. This reduces the length of the textual inputs while maintaining the main concepts of the abstract paragraph. In the following experiment, we randomly input the original or augmented texts for video-language pretraining. Check Appendix H for examples of pre- and post-augmented texts. ## 3.3 Procedure-aware Surgical Video-language Pretraining We introduce PeskaVLP, a procedure-aware pretraining framework for the above surgical knowledgeaugmented video-language dataset. We emphasize devising a pretraining objective LecNCE for the hierarchical video-text pairs. For clip-level pretraining, LecNCE clip combines language supervision with visual self-supervision to improve data efficiency and boost the scene understanding on visually similar laparoscopic images. LecNCE phase/video considers the procedure awareness during the coarser-level pretraining, through a DTW-based contrastive regularization objective with temporally reversed text sequences as negative samples. We apply the dual-encoder as our model architecture. ## 3.3.1 Clip-level Pretraining Language Supervision. The common pretraining objective for dual-encoder model is InfoNCE [51], as denoted in Eq. 1, where matched video text pairs are treated as positive while all other pairwise combinations in the batch are regarded as negative. In this work, we also apply InfoNCE to maximize the similarity between short-term video clips and their corresponding narration texts at the clip level, denoted as L vl clip . However, this simple objective is data hungry and sensitive to the weakly aligned noisy video-text pairs from small-scale surgical video-language datasets, such as SVL [76]. Visual Self-supervision. The proposed PeskaVLP approach introduces an additional supervision signal from visual self-supervision to complement noisy language supervision. Specifically, we explore the widespread supervision within visual modality to learn generic visual representation. We adopt the simple yet effective SimSiam [11] strategy that aims to maximize the similarity between two augmented views. As shown in Fig. 3 (a), during the pretraining, we apply random distortion on the frames of video clips and generate two augmented embedding vectors for one video clip. We then apply InfoNCE to maximize the similarity of these two augmented embeddings by treating them as positive pairs, denoted as L vv clip . This additional supervisory can learn visual features more efficiently and is robust to the distortion of surgical scene images. Finally, the LecNCE loss for clip-level pretraining is the sum of these two losses, denoted as LecNCE clip = L vl clip + L vv clip . ## 3.3.2 Phase-/Video-level Pretraining The surgical video-language pretraining presents a unique procedural challenge compared to the existing video-language methods [19, 47, 52, 71, 61]. The surgical actions and events occur in a certain order to follow the routine to complete the surgical phase and surgery, e.g., 'hook dissecting cystic duct' should happen before 'clipper cutting cystic duct' in the 'clipping cutting' phase of cholecystectomy surgery. However, prior contrastive learning objectives [46, 52, 19] omit this temporal dependency and limit the understanding of procedural knowledge in surgical lecture videos. Our proposed LecNCE training objective enables procedural understanding in phase- and videolevel pretraining by considering the cross-modal temporal alignment between video frames and text sequence. Specifically, hierarchical texts can form the parent-child correspondence, i.e., abstract (parent-level) and keystep (child-level) texts, keystep (parent-level) and narration (child-level) texts. As shown in Fig. 3 (b), each parent-level text A is paired with a video segment V = { v , ...v 1 T } , where the T is the number of frames of the video segment. A is also paired with a child-level text sequence B = { b 1 , ...b N } , where N is the length of this sequence. Then, we build the cost matrix C ∈ R T × N between video frames and child-level text sequence based on their embeddings, with each element c i,j computed by a distance function D . We adopt the same distance function from [21]: <!-- formula-not-decoded --> (2) Using this cost matrix C , we apply Dynamic Time Warping (DTW) to find the minimum cross-modal cost path that aligns the video frames to the text sequence, denoted as DTW C ( ) . We then make a reasonable assumption that the global semantics of the text sequence and its reversed version are distinct. Therefore, aligning the video frames to the text sequence should be easier, i.e., incur a lower alignment cost compared to aligning the same video frames when the text sequence is played in reverse. Following this assumption, we temporally reverse the child-level texts into ˆ = B { b n , ...b 1 } and build the cost matrix ˆ C between V and ˆ B , computing the minimum alignment cost DTW C ( ˆ ) . We then devise a DTW-based contrastive regularization using hinge loss as follows: <!-- formula-not-decoded --> where ϕ is the margin between positive and negative samples. This imposed regularization can support fine-grained multi-modal representation learning from weakly paired video frames and texts via temporal alignment. Unlike Paprika [80], which relies on a pretrained model [46], our phase-/video-level pretraining provides a direct, lightweight, and more adaptable methodology to unseen surgical domains. We do not require the adaption from any existing models, improving the generalization capability. Also, our pretraining process is procedure-aware in itself rather than modifying the representation in a second step, streamlining the process and increasing efficiency. We also apply the InfoNCE loss to maximize the similarity between the paired parent-level text, video segment, and child-level texts, denoted as L infonce . Note that the L infonce follows the same pipeline as in Fig. 1 (b) and (c). Finally, we achieve the loss LecNCE for phase- or video-level pretraining as LecNCE phase/video = L infonce + λL dtw , where λ is the hyper-parameter to scale two losses. Please refer to Appendix D for more details about dynamic time warping. Finally, we train the model in an alternating way, using the proposed hierarchical levels of learning objectives. We only train one set of visual and textual encoders for all three levels, ensuring the encoders are optimized for capturing both short-term and long-term semantics. We alternatively train with 25 batches of clip-level samples, followed by 15 and 115 batches of phase- and video-level samples. ## 4 Experiments Datasets. Our pretraining is conducted on the videos of SVL [76] dataset. The pertaining dataset includes hierarchical textual annotations from the metadata of the videos [75]. We evaluate our model on 3 publicly available surgical phase recognition downstream datasets, i.e., Cholec80 [67] (cholecystectomy) from Strasbourg center, AutoLaparo [69] (hysterectomy) from HongKong hospital, MultiBypass140 [31] (gastric bypass) from both Strasbourg (StrasBypass70) and Bern (BernBypass70) centers. These datasets contain untrimmed surgical workflows with frame-wise phase labels. We also evaluate pretrained model on the cross-modal retrieval task in multiple hierarchical levels with holdout videos in SVL-Retrieval [76]. Check Appendix A for more details about pretraining dataset. Training Parameters. We utilize the dual-encoder architecture with ResNet50 [23] as visual encoder and ClinicalBert [24] as textual encoder, respectively. We train the model with a batch size of 120 80 25 / / for clip-/phase-/video-level, respectively. We sample 4 16 64 / / frames for videos of clip/phase-/video-level. We use AdamW optimizer [30] with a learning rate of 5 e -5 . We train the model with 4 NVIDIA A 100 GPUs each having a DRAM of 80 GB for 200 epochs. Temperature parameter β for distance function and ϕ for DTW-base contrastive loss function D are fixed as 0 1 . . Scale factor λ is set as 0 01 . . Evaluation Setup. We evaluate pretrained models using two setups: Zero-Shot evaluation and Few/Full-shot Linear Probing evaluation. For Zero-Shot, we utilize class text prompts, the same as HecVL [75], to compute cosine similarities between image embedding and class text embeddings, classifying images based on the shortest distance. In Linear Probing, the pretrained visual encoder remains frozen when we extract features for each image, subsequently training a linear layer using the SGD optimizer. For few-shot linear probing, we train the linear layer with a few numbers of videos, referred to as k -% training, where k indicates the percentage of all the videos used in training. Check Appendix B for more details. Table 1: Zero-shot phase recognition results. We report Accuracy / F1-Score. PeskaVLP outperforms the other methods across different tasks. We report the state-of-the-art methods that are fine-tuned on the downstream dataset in a fully-supervised manner. However, models fine-tuned on specific downstream datasets show limited generalizability across procedures and institutions. | Model | Dataset | Cholec80 | Autolaparo | StrasBypass70 | BernBypass70 | Average | |----------------|-------------|-------------|--------------|-----------------|----------------|-------------| | TransVNet [17] | Cholec80 | 90.3 / - | - / - 82.0 / | - / - | - / - - | - / - - | | | Autolaparo | - / - | - | - / - | - / | - / | | ResNet50 [31] | BernBypass | - / - | - / - | 57.3 / 32.7 | 85.3 / 62.4 | - / - | | ResNet50 [31] | StrasBypass | - / - | - / - | 90.2 / 79.9 | 56.7 / 29.5 | - / - | | MIL-NCE [46] | Howto100M | 7.8 / 7.3 | 9.9 / 7.9 | 5.6 / 3.1 | 2.4 / 2.1 | 6.4 / 5.1 | | | CLIP400M | 30.8 / 13.1 | 17.4 / 9.1 | 16.9 / 5.5 | 14.8 / 4.1 | 19.9 / 8.0 | | CLIP [52] | Scratch | 29.4 / 10.4 | 15.3 / 10.9 | 6.3 / 3.5 | 4.9 / 2.3 | 14.0 / 6.8 | | | SVL | 33.8 / 19.6 | 18.9 / 16.2 | 15.8 / 8.6 | 17.8 / 7.1 | 21.6 / 12.9 | | SurgVLP [76] | SVL | 34.7 / 24.4 | 21.3 / 16.6 | 10.8 / 6.9 | 11.4 / 7.2 | 19.6 / 13.8 | | HecVL [75] | SVL | 41.7 / 26.3 | 23.3 / 18.9 | 26.9 / 18.3 | 22.8 / 13.6 | 28.7 / 19.3 | | PeskaVLP | SVL | 45.1 / 34.2 | 26.5 / 23.6 | 46.7 / 28.6 | 45.7 / 22.6 | 41.0 / 27.1 | ## 4.1 Zero-shot Surgical Phase Recognition High-quality Surgical Video-language Dataset. As shown in Table 1, our approach achieves a significant performance improvement over the baselines MIL-NCE [46] and CLIP [52] pretrained on the natural computer vision datasets, even though our pretraining dataset is 10 000 , times smaller than those. Note that when the CLIP model is randomly initialized and then trained with SVL, its performance declines compared to initializing from OpenAI. This shows that our surgical videolanguage pretraining dataset lacks the scale necessary to adequately pretrain a robust video-language model from scratch. ViT [13, 9] architectures are sensitive to initialization and excluded from this work. Further insights into the impact of initialization can be found in Appendix C. Transferability across Surgical Procedures and Centers. Compared to the HecVL, our method achieves over 12.3% and 7.8% improvement in absolute accuracy and f1, thanks to our spatialtemporal LecNCE learning objective across multiple hierarchies. Also, the consistent boost on cholecystectomy [67], hysterectomy [69], and gastric bypass [ ? ] procedures show the generalizable and transferable features of PeskaVLP. Comparing the results of StrasBypass and BernBypass, we find that PeskaVLP can recognize the phases of the same kind of surgery (gastric bypass), even if these surgeries are performed in different centers and follow different procedural routines. More qualitative results can be found in Appendix F. ## 4.2 Zero-shot Cross-modal Retrieval Table 2: We present cross-modal retrieval results on the holdout videos, highlighting the best performance in each setting in bold. We additionally include coarser-grained phase-keystep and abstract-video text pairs to assess long-term video and high-level textual understanding. | | Clip-Narration | Clip-Narration | Clip-Narration | Phase-Keystep | Phase-Keystep | Phase-Keystep | Video-Abstract | Video-Abstract | Video-Abstract | |--------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------| | method | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | | | Text-to-Image (%) | Text-to-Image (%) | Text-to-Image (%) | Text-to-Image (%) | Text-to-Image (%) | Text-to-Image (%) | Text-to-Image (%) | Text-to-Image (%) | Text-to-Image (%) | | CLIP [52] | 2.9 | 5.2 | 6.7 | 1.7 | 3.2 | 6.3 | 1.2 | 11.7 | 25.8 | | SurgVLP [76] | 2.8 | 11.8 | 16.1 | 1.6 | 6.8 | 11.6 | 1.3 | 8.2 | 15.5 | | HecVL [75] | 2.7 | 11.3 | 17.2 | 3.9 | 13.7 | 21.3 | 28.2 | 74.1 | 82.3 | | PeskaVLP | 3.2 | 13.2 | 23.3 | 6.1 | 21.0 | 35.4 | 38.8 | 75.3 | 85.9 | | | Image-to-Text (%) | Image-to-Text (%) | Image-to-Text (%) | Image-to-Text (%) | Image-to-Text (%) | Image-to-Text (%) | Image-to-Text (%) | Image-to-Text (%) | Image-to-Text (%) | | CLIP [52] | 1.8 | 3.9 | 6.0 | 0.3 | 1.2 | 2.7 | 0 | 7.0 | 16.4 | | SurgVLP [76] | 1.3 | 8.6 | 13.5 | 1.0 | 4.1 | 7.3 | 1.3 | 8.6 | 14.6 | | HecVL [75] | 2.1 | 9.0 | 16.2 | 1.9 | 8.3 | 14.8 | 21.2 | 65.9 | 71.8 | | PeskaVLP | 2.4 | 13.1 | 21.3 | 3.4 | 14.9 | 24.8 | 38.8 | 75.3 | 81.1 | In our study, we evaluate pretrained models' cross-modal alignment efficacy by conducting both zero-shot text-to-image and image-to-text retrieval tasks in multiple hierarchical levels. We report the Recall@N metric by identifying the retrieved nearest neighbors for each query and then determining whether the corresponding ground truth element is within the top N nearest neighbors, where N ∈ { 1 5 10 , , } . Table 2 shows that our PeskaVLP achieves superior performance due to the procedureaware learning objective in hierarchical pretraining. Particularly, the hierarchical pretraining scheme significantly boosts the cross-modal retrieval at the coarse-grained video-text pairs, comprehending the relationship between long video segments and high-level sentences with surgical terms. ## 4.3 Few-/Full-shot Linear Probing General Visual Representation for Surgical Scene Understanding. We present the few- and full-shot linear-probing evaluation in Table 3. It shows that the learned visual representation from PeskaVLP provides a general visual representation for surgical scene understanding across surgical procedures. We also find that the MoCo v2 [55, 22] pretrained on the frames of the SVL dataset (second row of Table 3) in a visual self-supervised manner achieves better visual representation than pretraining on a public dataset that only contains one type of surgery, e.g., Cholec80 (third row in Table 3). This shows that the cross-procedure surgical pretraining dataset enables better generalizationability. Knowledge Augmentation and Hierarchical Pretraining. Interestingly, the model pretrained contrastively with short video clips and narrations (SurgVLP) performs worse than MoCo v2 [55, 22] (second row in Table 3) in linear probing evaluation. This may be because the noisy narrations do not provide accurate natural language supervision for visual representation learning, thus highlighting the Table 3: Linear-probing evaluation results. V: supervision is from visual frames. L: supervision is from natural languages. VL: supervision is from both visual and language entities. | Model | Dataset | k-% | Cholec80 | Autolaparo | StrasBypass70 | BernBypass70 | |--------------|--------------|-------|-------------|--------------|-----------------|----------------| | ImageNet | ImageNet (V) | 100 | 66.4 / 54.9 | 57.5 / 44.9 | 66.2 / 53.6 | 64.7 / 31.6 | | | | 10 | 57.4 / 42.3 | 44.9 / 30.4 | 53.3 / 42.1 | 53.3 / 25.6 | | MoCo v2 [55] | SVL (V) | 100 | 68.2 / 55.8 | 59.5 / 48.4 | 71.6 / 58.1 | 69.6 / 36.5 | | MoCo v2 [55] | | 10 | 57.6 / 43.5 | 49.9 / 34.6 | 63.1 / 49.3 | 59.1 / 29.9 | | MoCo v2 [55] | Cholec80 (V) | 100 | 73.4 / 62.8 | 51.3 / 37.4 | 67.8 / 55.4 | 66.0 / 33.1 | | MoCo v2 [55] | | 10 | 69.6 / 56.9 | 45.4 / 31.7 | 58.1 / 45.2 | 52.7 / 25.7 | | CLIP [52] | NA (L) | 100 | 64.8 / 50.7 | 58.5 / 46.1 | 65.4 / 50.6 | 64.1 / 33.3 | | CLIP [52] | | 10 | 57.5 / 40.0 | 46.2 / 31.4 | 54.3 / 42.1 | 52.8 / 27.9 | | CLIP [52] | | 100 | 64.9 / 55.0 | 53.1 / 42.1 | 69.1 / 55.7 | 68.2 / 35.2 | | CLIP [52] | SVL (L) | 10 | 58.9 / 42.3 | 45.3 / 35.3 | 58.2 / 45.2 | 56.5 / 29.8 | | SurgVLP [76] | | 100 | 63.5 / 50.3 | 54.3 / 41.8 | 65.8 / 50.0 | 66.5 / 34.3 | | SurgVLP [76] | SVL (L) | 10 | 55.0 / 39.9 | 48.5 / 32.0 | 57.0 / 44.0 | 57.7 / 28.5 | | HecVL [75] | SVL (L) | 100 | 66.0 / 53.2 | 56.9 / 44.2 | 69.8 / 54.9 | 70.0 / 34.4 | | HecVL [75] | | 10 | 56.1 / 40.3 | 46.9 / 32.1 | 60.2 / 46.8 | 59.3 / 31.2 | | PeskaVLP | SVL (VL) | 100 | 69.9 / 59.8 | 63.1 / 49.7 | 71.4 / 59.5 | 71.5 / 37.4 | | PeskaVLP | | 10 | 61.9 / 50.6 | 53.1 / 36.8 | 63.8 / 50.4 | 62.9 / 32.7 | Table 4: Ablation study on different modifications. Knowledge: knowledge augmentation applied to the pretraining dataset at phase-level (P) and video-level texts (V). P/V: procedure-aware pretraining learning objective at phase and video-level. C: the integration of language and visual self-supervision at clip-level pretraining. We report 10 % -shot linear probing in this table. | LecNCE | LecNCE | Knowledge | Knowledge | Zero-shot | Zero-shot | Linear-probing | Linear-probing | |----------|----------|-------------|-------------|---------------|--------------|------------------|------------------| | P/V | C | P | V | Cholec80 | Autolaparo | Cholec80 | Autolaparo | | × | × | × | × | 41.7 / 26.3 | 23.3 / 18.9 | 56.1 / 40.3 | 46.9 / 32.1 | | × | ✓ | × | × | 45.5 / 31.0 | 25.3 / 20.0 | - / - | - / - | | × | × | ✓ | ✓ | 42.4 / 28.1 | 24.9 / 20.4 | 58.1 / 43.2 | 48.5 / 34.7 | | × | ✓ | ✓ | ✓ | 43.4 / 30.3 | 28.3 / 24.5 | 60.4 / 48.6 | 53.8 / 39.2 | | ✓ | ✓ | ✓ | × | 44.0 / 31.8 | - / - | - / - | - / - | | ✓ | ✓ | × | ✓ | 43.7 / 30.6 | - / - | - / - | - / - | | ✓ | ✓ | ✓ | ✓ | 45.1 / 34.2 | 26.5 / 23.6 | 61.9 / 50.6 | 53.1 / 36.8 | | | | | | StrasBypass70 | BernBypass70 | StrasBypass70 | BernBypass70 | | × | × | ✓ | ✓ | 26.9 / 18.3 | 22.8 / 13.6 | 60.2 / 46.8 | 59.3 / 31.2 | | × | × | ✓ | ✓ | 32.3 / 21.2 | 23.8 / 17.5 | 62.6 / 47.7 | 60.3 / 32.3 | | × | ✓ | ✓ | ✓ | 39.8 / 23.7 | 25.7 / 21.3 | 63.5 / 48.6 | 62.2 / 32.0 | | ✓ | ✓ | ✓ | ✓ | 45.1 / 34.2 | 26.5 / 23.6 | 63.8 / 50.4 | 62.9 / 32.7 | importance of visual self-supervision and textual quality. Our model surpasses the prior methods by a large margin, showing the efficacy of our hierarchical knowledge augmentation, which denoises the text and improves textual quality. Also, our proposed LecNCE promotes the visual encoder through additional visual self-supervision and procedural understanding. We present t-SNE visualizations of learned features in Appendix E, which shows that our multi-modal representations exhibit a smaller modality gap, enhancing transferability to vision-and-language downstream tasks [20, 39]. ## 4.4 Ablation Studies Effect of Knowledge Augmentation. Table 4 presents the effect of our proposed LLM-based hierarchical knowledge-aware augmentation strategy, applied to the texts of SVL dataset. The first row of the table corresponds to HecVL [75] pretrained on SVL with only conventional visual augmentations, e.g., blurring and so on, without any knowledge augmentation. The results clearly demonstrate that simple visual augmentation strategies exhibit poor robustness as the texts of SVL are noisy and not diverse enough. Conversely, our knowledge-aware text augmentation consistently improves performance across multiple surgical datasets, highlighting the importance of the textual quality of the surgical video-language pretraining dataset. We found that integrating visual self-supervision with language supervision significantly enhances performance in surgical scene understanding tasks across downstream datasets. Additionally, using a procedure-aware learning objective improves surgical phase recognition for routine procedures, such as cholecystectomy (Cholec80), more effectively than complex procedures, like hysterectomy (Autolaparo). Effect of Pretraining Objective. Table 4 shows the impact of our learning objective for hierarchical surgical video-language pretraining. When we append visual self-supervision to language supervision at the clip-level pretraining, the zero-shot performance is clearly improved. This improvement can be attributed to the added diverse and high-quality supervision. Also, the boost at linear-probing evaluation shows that the combination of language supervision and visual self-supervision leads to a robust visual representation especially with a moderate size of surgical video-language dataset, e.g., SVL. Table 4 also highlights that the inclusion of LecNCE with procedure understanding consistently improves performance across most downstream datasets, leading to enhanced accuracy in both zero-shot and linear-probing. However, performance on the AutoLaparo degrades with this modification. This may be due to challenging or less routined surgical procedures in the pretraining dataset. ## 5 Conclusion, Limitations and Broader Impact Conclusion. We have introduced a surgical video-language pretraining method for long-term surgical lecture videos and their hierarchical paired texts. Our proposed knowledge augmentation addresses the hierarchical textual information loss by integrating the large language model's internal surgical knowledge. Also, we propose a novel spatial-temporal pretraining objective for video-text pairs of different hierarchies, which addresses the lack of supervision signals problem in a small surgical vision-language dataset. The proposed LecNCE also addresses the procedural awareness problem, benefiting the long-term cross-modal understanding. The experiments show that our proposed PeskaVLP achieves the state-of-the-art generalized zero-shot ability and visual representation learning that can serve as a general initialization for many surgical scene understanding tasks. Limitations. While our LLM-augmented strategy enhances textual information, it may overly standardize the text, raising concerns about overfitting during pretraining. Therefore, it is crucial to strike a balance between leveraging LLM capabilities and maintaining the variability present in real-world surgical narratives. To address this, future work will explore incorporating diverse audio inputs and spontaneous narratives into the pretraining process, ensuring that the model retains robustness and adaptability in real-world applications. Additionally, even though the SVL pretraining dataset covers diverse laparoscopic surgeries, it lacks surgeries in different organs, such as the brain and heart. To address this, we plan to expand the pretraining dataset using diverse media such as textbooks, instructional videos, and intraoperative video recordings from diverse sources. We also aim to diversify the pretraining dataset by considering laparoscopic, endoscopic, and microscopic surgeries on different organs, to further mitigate the risk of overfitting and enhance the model's generalizability. Broader Impact. The primary goal of surgical data science is to develop novel context-aware support systems for the operating room by collecting large-scale surgical data and analyzing it with modern AI techniques, eventually improving the safety and efficacy of surgical outcomes. The recent advancements in vision-language-based multi-modal AI offer significant potential in achieving this goal by enabling the development of more robust and generalizable models. These multi-modal systems have the potential to support clinical decision-making, streamline surgical workflows, provide real-time intra-operative guidance to improve surgical precision, reduce errors, and optimize outcomes in the operating room. During the development, patient data privacy should be considered as a fundamental ethical requirement. These systems developed on real-world surgical data also hold transformative potential in medical education, enhancing training and skill development in both novice and experienced surgeons. ## Acknowledgements We would like to extend our deep appreciation to the education platforms, such as Websurg (IRCAD), EAES, and YouTube, for their dedication to providing high-quality educational content freely accessible to learners worldwide. We are especially grateful to the clinicians who have generously contributed their time and expertise to create and share content on these platforms, making this research possible. This work has received funding from the European Union (ERC, CompSURG, 101088553). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. This work was also partially supported by French state funds managed by the ANR under Grants ANR-20-CHIA-0029-01 and ANR-10-IAHU-02. This work was granted access to the HPC resources of IDRIS under the allocations AD011013704R1, AD011011631R2, and AD011011631R4 made by GENCI. The authors would like to acknowledge the High-Performance Computing Center of the University of Strasbourg for supporting this work by providing scientific support and access to computing resources. Part of the computing resources were funded by the Equipex Equip@Meso project (Programme Investissements d'Avenir) and the CPER Alsacalcul/Big Data. ## References - [1] Amro Abbas, Kushal Tirumala, Dániel Simig, Surya Ganguli, and Ari S Morcos. Semdedup: Data-efficient learning at web-scale through semantic deduplication. arXiv preprint arXiv:2303.09540 , 2023. - [2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 , 2023. - [3] Deepak Alapatt, Aditya Murali, Vinkle Srivastav, AI4SafeChole Consortium, Pietro Mascagni, and Nicolas Padoy. Jumpstarting surgical computer vision. In International Conference on Medical Image Computing and Computer-Assisted Intervention , pages 328-338. Springer, 2024. - [4] Kumar Ashutosh, Rohit Girdhar, Lorenzo Torresani, and Kristen Grauman. Hiervl: Learning hierarchical video-language embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 23066-23078, 2023. - [5] AWS. Amazon transcribe medical, 2023. - [6] Nicolás Ayobi, Santiago Rodríguez, Alejandra Pérez, Isabela Hernández, Nicolás Aparicio, Eugénie Dessevres, Sebastián Peña, Jessica Santander, Juan Ignacio Caicedo, Nicolás Fernández, et al. Pixel-wise recognition for holistic surgical scene understanding. arXiv preprint arXiv:2401.11174 , 2024. - [7] Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 1728-1738, 2021. - [8] Vivek Singh Bawa, Gurkirt Singh, Francis KapingA, Inna Skarga-Bandurova, Elettra Oleari, Alice Leporini, Carmela Landolfo, Pengfei Zhao, Xi Xiang, Gongning Luo, et al. The saras endoscopic surgeon action detection (esad) dataset: Challenges and methods. arXiv preprint arXiv:2104.03178 , 2021. - [9] Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In ICML , volume 2, page 4, 2021. - [10] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 9650-9660, 2021. - [11] Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 15750-15758, 2021. - [12] Zhihong Chen, Maya Varma, Jean-Benoit Delbrouck, Magdalini Paschali, Louis Blankemeier, Dave Van Veen, Jeya Maria Jose Valanarasu, Alaa Youssef, Joseph Paul Cohen, Eduardo Pontes Reis, et al. Chexagent: Towards a foundation model for chest x-ray interpretation. arXiv preprint arXiv:2401.12208 , 2024. | [13] | Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 , 2020. | |--------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [14] | Lijie Fan, Dilip Krishnan, Phillip Isola, Dina Katabi, and Yonglong Tian. Improving clip training with language rewrites. Advances in Neural Information Processing Systems , 36, 2024. | | [15] | Christiane Fellbaum. WordNet: An electronic lexical database . MIT press, 1998. | | [16] | Isabel Funke, Dominik Rivoir, Stefanie Krell, and Stefanie Speidel. Tunes: A temporal u-net with self-attention for video-based surgical phase recognition. arXiv preprint arXiv:2307.09997 , 2023. | | [17] | Xiaojie Gao, Yueming Jin, Yonghao Long, Qi Dou, and Pheng-Ann Heng. Trans-svnet: Accurate phase recognition from surgical videos via hybrid embedding aggregation transformer. In Medical Image Computing and Computer Assisted Intervention-MICCAI 2021: 24th International Conference, Strasbourg, France, September 27-October 1, 2021, Proceedings, Part IV 24 , pages 593-603. Springer, 2021. | | [18] | Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231 , 2018. | | [19] | Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 18995-19012, 2022. | | [20] | Sophia Gu, Christopher Clark, and Aniruddha Kembhavi. I can't believe there's no images! learning visual tasks using only language supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 2672-2683, 2023. | | [21] | Isma Hadji, Konstantinos G Derpanis, and Allan D Jepson. Representation learning via global temporal alignment and cycle-consistency. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 11068-11077, 2021. | | [22] | Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 9729-9738, 2020. | | [23] | Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 770-778, 2016. | | [24] | Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. Clinicalbert: Modeling clinical notes and predicting hospital readmission. arXiv preprint arXiv:1904.05342 , 2019. | | [25] | Shih-Cheng Huang, Liyue Shen, Matthew P Lungren, and Serena Yeung. Gloria: A multimodal global- local representation learning framework for label-efficient medical image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 3942-3951, 2021. | | [26] | Zhi Huang, Federico Bianchi, Mert Yuksekgonul, Thomas J Montine, and James Zou. A visual-language foundation model for pathology image analysis using medical twitter. Nature medicine , 29(9):2307-2316, 2023. | | [27] | Wisdom Ikezogwo, Saygin Seyfioglu, Fatemeh Ghezloo, Dylan Geva, Fatwir Sheikh Mohammed, Pa- van Kumar Anand, Ranjay Krishna, and Linda Shapiro. Quilt-1m: One million image-text pairs for histopathology. Advances in Neural Information Processing Systems , 36, 2024. | | [28] | Alistair EWJohnson, Tom J Pollard, Nathaniel R Greenbaum, Matthew P Lungren, Chih-ying Deng, Yifan Peng, Zhiyong Lu, Roger G Mark, Seth J Berkowitz, and Steven Horng. Mimic-cxr-jpg, a large publicly available database of labeled chest radiographs. arXiv preprint arXiv:1901.07042 , 2019. | | [29] | Siddharth Kannan, Gaurav Yengera, Didier Mutter, Jacques Marescaux, and Nicolas Padoy. Future-state predicting lstm for early surgery type recognition. IEEE Transactions on Medical Imaging , 39(3):556-566, 2019. | | [30] | Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint | arXiv:1412.6980 , 2014. | [31] | Joël L Lavanchy, Sanat Ramesh, Diego Dall'Alba, Cristians Gonzalez, Paolo Fiorini, Beat P Müller- Stich, Philipp C Nett, Jacques Marescaux, Didier Mutter, and Nicolas Padoy. Challenges in multi-centric generalization: phase and step recognition in roux-en-y gastric bypass surgery. International journal of computer assisted radiology and surgery , pages 1-9, 2024. | |--------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [32] | Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning , pages 19730-19742. PMLR, 2023. | | [33] | Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International conference on machine learning , pages 12888-12900. PMLR, 2022. | | [34] | Kunchang Li, Yali Wang, Yizhuo Li, Yi Wang, Yinan He, Limin Wang, and Yu Qiao. Unmasked teacher: Towards training-efficient video foundation models. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 19948-19960, 2023. | | [35] | Wei Li, Linchao Zhu, Longyin Wen, and Yi Yang. Decap: Decoding clip latents for zero-shot captioning via text-only training. arXiv preprint arXiv:2303.03032 , 2023. | | [36] | Xianhang Li, Zeyu Wang, and Cihang Xie. An inverse scaling law for clip training. Advances in Neural Information Processing Systems , 36, 2024. | | [37] | Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli Ouyang, Jing Shao, Fengwei Yu, and Junjie Yan. Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm. arXiv preprint arXiv:2110.05208 , 2021. | | [38] | Zichao Li, Cihang Xie, and Ekin Dogus Cubuk. Scaling (down) clip: A comprehensive analysis of data, architecture, and training strategies. arXiv preprint arXiv:2404.08197 , 2024. | | [39] | Victor Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, and James Y Zou. Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning. Advances in Neural Information Processing Systems , 35:17612-17625, 2022. | | [40] | Kevin Qinghong Lin, Jinpeng Wang, Mattia Soldan, Michael Wray, Rui Yan, Eric Z Xu, Difei Gao, Rong-Cheng Tu, Wenzhe Zhao, Weijie Kong, et al. Egocentric video-language pretraining. Advances in Neural Information Processing Systems , 35:7575-7586, 2022. | | [41] | Timo Lüddecke and Alexander Ecker. Image segmentation using text and image prompts. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 7086-7096, 2022. | | [42] | Huaishao Luo, Lei Ji, Botian Shi, Haoyang Huang, Nan Duan, Tianrui Li, Jason Li, Taroon Bharti, and Ming Zhou. Univl: A unified video and language pre-training model for multimodal understanding and generation. arXiv preprint arXiv:2002.06353 , 2020. | | [43] | Lena Maier-Hein, Matthias Eisenmann, Duygu Sarikaya, Keno März, Toby Collins, Anand Malpani, Johannes Fallert, Hubertus Feussner, Stamatia Giannarou, Pietro Mascagni, et al. Surgical data science- from concepts toward clinical translation. Medical image analysis , 76:102306, 2022. | | [44] | Lena Maier-Hein, Swaroop S Vedula, Stefanie Speidel, Nassir Navab, Ron Kikinis, Adrian Park, Matthias Eisenmann, Hubertus Feussner, Germain Forestier, Stamatia Giannarou, et al. Surgical data science for next-generation interventions. Nature Biomedical Engineering , 1(9):691-696, 2017. | | [45] | Christian MMeyer and Iryna Gurevych. Wiktionary: A new rival for expert-built lexicons? Exploring the possibilities of collaborative lexicography . na, 2012. | | [46] | Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, and Andrew Zisserman. End-to-end learning of visual representations from uncurated instructional videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 9879-9889, 2020. | | [47] | Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 2630-2640, 2019. | | [48] | Norman Mu, Alexander Kirillov, David Wagner, and Saining Xie. Slip: Self-supervision meets language- image pre-training. In European conference on computer vision , pages 529-544. Springer, 2022. | | [49] | Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, and Haibin Ling. Expanding language-image pretrained models for general video recognition. In European Conference on Computer Vision , pages 1-18. Springer, 2022. | | [50] | Chinedu Innocent Nwoye, Tong Yu, Cristians Gonzalez, Barbara Seeliger, Pietro Mascagni, Didier Mutter, Jacques Marescaux, and Nicolas Padoy. Rendezvous: Attention mechanisms for the recognition of surgical action triplets in endoscopic videos. Medical Image Analysis , 78:102433, 2022. | |--------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [51] | Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748 , 2018. | | [52] | Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning , pages 8748-8763. PMLR, 2021. | | [53] | Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning , pages 28492-28518. PMLR, 2023. | | [54] | Sanat Ramesh, Diego Dall'Alba, Cristians Gonzalez, Tong Yu, Pietro Mascagni, Didier Mutter, Jacques Marescaux, Paolo Fiorini, and Nicolas Padoy. Multi-task temporal convolutional networks for joint recognition of surgical phases and steps in gastric bypass procedures. International journal of computer assisted radiology and surgery , 16:1111-1119, 2021. | | [55] | Sanat Ramesh, Vinkle Srivastav, Deepak Alapatt, Tong Yu, Aditya Murali, Luca Sestini, Chinedu Innocent Nwoye, Idris Hamoud, Saurav Sharma, Antoine Fleurentin, et al. Dissecting self-supervised learning methods for surgical computer vision. Medical Image Analysis , 88:102844, 2023. | | [56] | Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. Imagenet-21k pretraining for the masses. arXiv preprint arXiv:2104.10972 , 2021. | | [57] | Dominik Rivoir, Sebastian Bodenstedt, Isabel Funke, Felix von Bechtolsheim, Marius Distler, Jürgen Weitz, and Stefanie Speidel. Rethinking anticipation tasks: Uncertainty-aware anticipation of sparse surgical instrument usage for context-aware assistance. In International Conference on Medical Image Computing and Computer-Assisted Intervention , pages 752-762. Springer, 2020. | | [58] | Noam Rotstein, David Bensaïd, Shaked Brody, Roy Ganz, and Ron Kimmel. Fusecap: Leveraging large language models for enriched fused image captions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision , pages 5689-5700, 2024. | | [59] | Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision , 115:211-252, 2015. | | [60] | Hiroaki Sakoe and Seibi Chiba. Dynamic programming algorithm optimization for spoken word recognition. IEEE transactions on acoustics, speech, and signal processing , 26(1):43-49, 1978. | | [61] | Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine, and Google Brain. Time-contrastive networks: Self-supervised learning from video. In 2018 IEEE international conference on robotics and automation (ICRA) , pages 1134-1141. IEEE, 2018. | | [62] | Saurav Sharma, Chinedu Innocent Nwoye, Didier Mutter, and Nicolas Padoy. Surgical action triplet detection by mixed supervised learning of instrument-tissue interactions. In International Conference on Medical Image Computing and Computer-Assisted Intervention , pages 505-514. Springer, 2023. | | [63] | Sheng Shen, Chunyuan Li, Xiaowei Hu, Yujia Xie, Jianwei Yang, Pengchuan Zhang, Zhe Gan, Lijuan Wang, Lu Yuan, Ce Liu, et al. K-lite: Learning transferable visual models with external knowledge. Advances in Neural Information Processing Systems , 35:15558-15573, 2022. | | [64] | Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. Large language models encode clinical knowledge. Nature , 620(7972):172-180, 2023. | | [65] | Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. Videobert: A joint model for video and language representation learning. In Proceedings of the IEEE/CVF international conference on computer vision , pages 7464-7473, 2019. | | [66] | Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 , 2023. | | [67] | Andru P Twinanda, Sherif Shehata, Didier Mutter, Jacques Marescaux, Michel De Mathelin, and Nicolas Padoy. Endonet: a deep architecture for recognition tasks on laparoscopic videos. IEEE transactions on medical imaging , 36(1):86-97, 2016. | |--------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [68] | Bairui Wang, Lin Ma, Wei Zhang, and Wei Liu. Reconstruction network for video captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 7622-7631, 2018. | | [69] | Ziyi Wang, Bo Lu, Yonghao Long, Fangxun Zhong, Tak-Hong Cheung, Qi Dou, and Yunhui Liu. Au- tolaparo: A new dataset of integrated multi-tasks for image-guided surgical automation in laparoscopic hysterectomy. In International Conference on Medical Image Computing and Computer-Assisted Interven- tion , pages 486-496. Springer, 2022. | | [70] | Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, and Christoph Feichtenhofer. Videoclip: Contrastive pre-training for zero-shot video-text understanding. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing , pages 6787-6800, 2021. | | [71] | Zihui Sherry Xue and Kristen Grauman. Learning fine-grained view-invariant representations from unpaired ego-exo videos via temporal alignment. Advances in Neural Information Processing Systems , 36, 2024. | | [72] | Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive captioners are image-text foundation models. arXiv preprint arXiv:2205.01917 , 2022. | | [73] | Qiying Yu, Quan Sun, Xiaosong Zhang, Yufeng Cui, Fan Zhang, Xinlong Wang, and Jingjing Liu. Capsfusion: Rethinking image-text data at scale. arXiv preprint arXiv:2310.20550 , 2023. | | [74] | Kun Yuan, Matthew Holden, Shijian Gao, and Won-Sook Lee. Surgical workflow anticipation using instrument interaction. In Medical Image Computing and Computer Assisted Intervention-MICCAI 2021: 24th International Conference, Strasbourg, France, September 27-October 1, 2021, Proceedings, Part IV 24 , pages 615-625. Springer, 2021. | | [75] | Kun Yuan, Vinkle Srivastav, Nassir Navab, and Nicolas Padoy. Hecvl: Hierarchical video-language pretraining for zero-shot surgical phase recognition. arXiv preprint arXiv:2405.10075 , 2024. | | [76] | Kun Yuan, Vinkle Srivastav, Tong Yu, Joel Lavanchy, Pietro Mascagni, Nassir Navab, and Nicolas Padoy. Learning multi-modal representations by watching hundreds of surgical video lectures. arXiv preprint arXiv:2307.15220 , 2023. | | [77] | Xin Yuan, Zhe Lin, Jason Kuen, Jianming Zhang, Yilin Wang, Michael Maire, Ajinkya Kale, and Baldo Faieta. Multimodal contrastive training for visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 6995-7004, 2021. | | [78] | Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, and Yejin Choi. Merlot: Multimodal neural script knowledge models. Advances in Neural Information Processing Systems , 34:23634-23651, 2021. | | [79] | Bowen Zhang, Hexiang Hu, and Fei Sha. Cross-modal and hierarchical modeling of video and text. In Proceedings of the european conference on computer vision (ECCV) , pages 374-390, 2018. | | [80] | Honglu Zhou, Roberto Martín-Martín, Mubbasir Kapadia, Silvio Savarese, and Juan Carlos Niebles. Procedure-aware pretraining for instructional video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 10727-10738, 2023. | | [81] | Luowei Zhou, Yingbo Zhou, Jason J Corso, Richard Socher, and Caiming Xiong. End-to-end dense video captioning with masked transformer. In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 8739-8748, 2018. | | [82] | Xueyan Zou, Zi-Yi Dou, Jianwei Yang, Zhe Gan, Linjie Li, Chunyuan Li, Xiyang Dai, Harkirat Behl, Jianfeng Wang, Lu Yuan, et al. Generalized decoding for pixel, image, and language. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 15116-15127, 2023. | | [83] | Xueyan Zou, Jianwei Yang, Hao Zhang, Feng Li, Linjie Li, Jianfeng Wang, Lijuan Wang, Jianfeng Gao, and Yong Jae Lee. Segment everything everywhere all at once. Advances in Neural Information Processing | Systems , 36, 2024. ## A Pretraining Dataset ## A.1 Videos We start with the videos that are used for surgical vision-language pretraining in [76]. In total, there are 1 326 , surgical lecture videos. These videos are transcribed by AWS [5] and Whisper [53] audio speech recognition (ASR) to obtain the corresponding narration texts. Furthermore, we curate the videos' metadata from the online platforms to obtain the extra keystep and abstract texts. In the phaseand video-level pretraining, we need parent- and child-level text correspondences, e.g., keystep and its corresponding narration texts, to perform procedure understanding. Therefore, we filter out the videos that do not have parent-child correspondences. In total, we have 1 007 , and 920 videos for phase- and video-level pretraining, respectively. ## A.2 Misspelling Error As the narration texts are generated from the audio using the ASR system, they usually contain many misspelling errors and fragment sentences. Therefore, we apply multiple preprocessing steps to clean the narration texts. We first built the vocabulary based on the textbook, surgical category labels, and definition words. Specifically, we refer to the academic papers, which define the surgical phases, to curate a list of definition words and build a vocabulary that contains the words of interest. We also parse and merge the words from the textbook. In total, we obtain a vocabulary of the size of 51 640 , words. Then, we use the built vocabulary along with the spell-checking algorithm 1 to correct the misspelling errors in narration texts. The algorithm utilizes Levenshtein Distance to identify words within 2 edit distances from the original. It then cross-references these permutations (insertions, deletions, replacements, and transpositions) with a word frequency list, prioritizing words with higher occurrence frequencies as potential correct results. ## B Evaluation Setup We provide a detailed description of the downstream tasks and their settings that we apply in the experiment. Surgical Phase Recognition. Surgical phase recognition is a proxy task to test the model's surgical scene understanding ability. It aims to classify the frame of surgical video into predefined classes (phases), requiring the model to understand the instrument and anatomy's presence and their interactions by extracting visual patterns from the surgical scene image. In this work, we ignore temporal modeling in surgical phase recognition as we focus on multi-modal representation learning. We consider phase recognition as a frame-wise image classification problem. In the surgical phase recognition task, we evaluate the model's performance based on the publicly available datasets, including Cholec80 [67], AutoLaparo [69] and MultiBypass [ ? ]. - · Zero-shot Evaluation. As the surgical phase labels are high-level definitions that can be decomposed into a few basic concepts, we manually construct the contextual prompts for phase labels, as shown in Tab. 5, Tab. 6 and Tab. 7. Our constructed prompts for the class names are built with the help of clinician's comments, considering the involved surgical instruments, anatomies, and events involved in a given surgical phase. - · Linear-probing Evaluation. For linear-probing evaluation on the surgical phase recognition downstream datasets, we keep the visual encoder frozen and train a linear classifier on the extracted features. We do not apply any image augmentation during the training. The learning rate is scaled linearly based on the actual batch size. The model is optimized using SGD optimizer with the learning rate as 0 001 . and weight decay parameter as 0 0005 . . We train the model for 40 epochs. We fit the model on the training and validation sets and report the performance on the separate test set. For the few-shot linear-probing evaluation, we adopt a k -percentage shot approach with a slight modification to accommodate the nature of surgical videos, which contain frames from different classes. Specifically, we select 10% 1 https://github.com/barrust/pyspellchecker/ Table 5: Manually designed prompts for the class names to recognize the surgical phase in Cholec80 dataset. We decompose high-level phase definitions into a few basic concepts to form the text prompts. | Phase Labels | Prompts | |-------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------| | Preparation | In preparation phase I insert trocars to patient abdomen cavity | | CalotTriangleDissection | In calot triangle dissection phase I use grasper to hold gallbladder and use hook to expose the hepatic triangle area and cystic duct and cystic artery | | ClippingCutting | In clip and cut phase I use clipper to clip the cystic duct and artery then use scissor to cut them | | GallbladderDissection | In dissection phase I use the hook to dissect the connective tissue between gallbladder and liver | | GallbladderPacking | In packaging phase I put the gallbladder into the specimen bag | | CleaningCoagulation | In clean and coagulation phase I use suction and irrigation to clear the surgical field and coagulate bleeding vessels | | GallbladderRetraction | In retraction phase I grasp the specimen bag and remove it from trocar | Table 6: Manually designed prompts for the class names to recognize the surgical phase in AutoLaparo dataset. | Phase Labels | Prompts | |---------------------------------------|--------------------------------------------------| | Preparation | I use grasper to grasp and explore the field | | Dividing Ligament and Peritoneum | I divide ligament and peritoneum | | Dividing Uterine Vessels and Ligament | I divide uterine vessels and ligament | | Transecting the Vagina | I use the dissecting hook to transect the vagina | | Specimen Removal | I remove the specimen bag and uterus | | Suturing | I suture the tissue | | Washing | Washing | of the video from the training set. This ensures that data leakage is prevented and that the number of samples per class remains similar. Cross-modal Retrieval. Cross-modal retrieval includes text-based video retrieval and video-based text retrieval. Here, we conduct the cross-modal retrieval at three hierarchical levels. We collect 537 clip-narration (clip-level) video-text pairs, 746 phase-keystep (phase-level) video-text pairs, and 86 video-abstract (video-level) video-text pairs from hold-out testing videos of SVL [76]. There are more phase-keystep than clip-narration video-text pairs because some testing videos do not have cleaned narrations and we filter them out. For video embedding generation, we sample multiple frames fro m the video and average pool their image embeddings. We temporally sample 10 frames for clip-/phase-/video-level videos. We conduct the zero-shot evaluation for the cross-modal retrieval task. ## C Architecture &amp; Initialization As mentioned before, the current surgical vision-language pretraining dataset lacks the scale necessary to pretrain a robust vision-language model from scratch, therefore a good choice of architecture and initialization is important. In this section, we conduct the experiment and study the effect of different model architectures and initializations, justifying our choice of using ResNet50 architecture with ImageNet initialization as our starting point before the video-language pretraining. - · ResNet50. For ImageNet initialization, we use public IMAGENET1K\_V1 weights from torchvision. Random initialization means that we random initialize the visual encoder before the hierarchical vision-language pretraining. These models' textual encoders are initialized from BioClinicalBert [24]. For CLIP initialization, we initialize the visual and textual encoder from OpenAI's weight [52]. Table 7: Manually designed prompts for the class names to recognize the surgical phase in gastric bypass dataset. We use the same prompts for both StrasBypass70 and BernBypass70. We exclude the 'other' class as its definition is ambiguous. | Phase Labels | Prompts | |---------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Preparation | In preparation phase I insert trocars to the abdominal cavity and expose of the operating field | | Gastric pouch creation | I cut the fat tissue and open retrogastric window at stomach | | Omentum division | I grasp and lift the omentum and divide it | | Gastrojejunal anastomosis | I see the proximal jejunum and determine the length of the biliary limb. I open the distal jejunum and create the gastrojejunostomy using a stapler. I reinforcement of the gastrojejunostomy with an additional suture. | | Anastomosis test | I place the retractor and move the gastric tube and detect any leakage of the gastrojejunostomy | | Jejunal separation | I open the mesentery to facilitate the introduction of the stapler and transect the jejunum proximal | | Petersen space closure | I expose between the alimentary limb and the transverse colon and close it with sutures | | Jejunojejunal anastomosis | I expose between the alimentary limb and the transverse colon and close it with sutures | | Mesenteric defect closure | I expose the mesenteric defect and then close it by stitches | | Cleaning and coagulation | In clean and coagulation phase I use suction and irrigation to clear the surgical field and coagulate bleeding vessels | | Disassembling | I remove the instruments, retractor, ports, and camera | | Backbone | Init. | Zero-shot | Zero-shot | Linear-probing (10-shot) | Linear-probing (10-shot) | Linear-probing (full-shot) | Linear-probing (full-shot) | |------------|----------|-------------|-------------|----------------------------|----------------------------|------------------------------|------------------------------| | Backbone | Init. | Cholec80 | Autolaparo | Cholec80 | Autolaparo | Cholec80 | Autolaparo | | | Random | 29.4 / 10.4 | 15.3 / 10.9 | 42.4 / 22.1 | 33.4 / 20.2 | 44.6 / 25.3 | 30.7 / 19.3 | | | ImageNet | 34.7 / 24.4 | 21.3 / 16.6 | 55.0 / 39.9 | 48.5 / 32.0 | 63.5 / 50.3 | 54.3 / 41.8 | | | CLIP | 33.8 / 19.6 | 18.9 / 16.2 | 58.9 / 42.3 | 45.3 / 35.3 | 64.9 / 55.0 | 53.1 / 42.1 | | | Random | 20.2 / 11.5 | 9.1 / 8.3 | 38.4 / 20.9 | 32.1 / 19.7 | 48.2 / 25.9 | 38.4 / 25.5 | | | ImageNet | 42.8 / 25.1 | 20.5 / 15.5 | 57.4 / 40.5 | 47.8 / 31.9 | 60.6 / 48.9 | 56.3 / 44.5 | | | Dino | 35.1 / 19.1 | 13.9 / 9.2 | 54.7 / 39.2 | 47.4 / 31.1 | 64.9 / 51.2 | 54.0 / 42.4 | Table 8: The experiments show that the initialization largely influences the performance of surgical video-language pretraining. - · ViT-B/16. For ImageNet initialization, we use weights from the official Google JAX implementation, which is pretrained on ImageNet21k [56] and then finetune on ImageNet1k [59]. We use the public pretrained weights from [10] for Dino initialization. In our work, we choose ResNet50 over Vision Transformer (ViT-B/16) due to its superior performance and lower parameter amounts in the context of video-language pretraining for surgical data. Our experiments demonstrated that ResNet50, particularly when initialized with CLIP weights, outperformed ViT-B/16 across various tasks, including zero-shot and linear-probing evaluations on Cholec80 and Autolaparo datasets. Despite the advanced capabilities of vision transformers, their performance heavily depends on large-scale pretraining datasets, which might not always be available or optimal for specialized domains like surgical scenes. Conversely, convolutional neural networks like ResNet50 have shown robust generalization abilities, even when pretrained on natural images, making them more suitable for our specific application. Additionally, the initialization sensitivity observed in ViT-B/16 further justified our preference for ResNet50, ensuring a more reliable and effective starting point for our hierarchical vision-language pretraining. ## D Dynamic Time Warping After achieving the cost matrix C and ˆ C , we perform dynamic time warping (DTW) [60] to find the minimum cost path to align the frames of video segment V = { v , ...v 1 T } to the text sequence B = { b 1 , ...b N } and reversed text sequence { b N , ...b 1 } , respectively, as shown in Algorithm. 1. We follow [71] to process the DTW function into differentiable, enabling the gradient back-propagation. The differentiable loss function is the same as [21]. A significant advantage of using DTW is that it does not require additional temporal modules, such as recurrent neural networks or attention mechanisms, to model temporal relationships. This simplification allows us to focus on learning better representations by directly aligning video frames and text sequences based on their semantics. ## Algorithm 1 DTWto align sequences using cost matrix ``` 1: procedure ALIGNSEQUENCES( C, V, B ) 2: Let T be the length of sequence V and N be the length of sequence B . 3: Set i to T and j to N . 4: Initialize distance to 0 . 5: while i > 0 and j > 0 do 6: distance = distance + C i [ ][ j ] 7: if i > 1 and j > 1 and C i [ -1][ j -1] ≤ C i [ -1][ j ] and C i [ -1][ j -1] ≤ C i [ ][ j -1] then 8: i ← -i 1 9: j ← -j 1 10: else if i > 1 and C i [ -1][ j ] ≤ C i [ ][ j -1] then 11: i ← -i 1 12: else 13: j ← -j 1 14: end if 15: end while 16: return distance . 17: end procedure ``` ## E Modality Gap Modality gap is a geometric phenomenon observed in the embedding space of multi-modal models [39]. This gap illustrates that pretrained multi-modal (vision-language) models create a joint embedding space where different modalities, such as images and text, are kept at a significant distance from each other. During contrastive optimization, this separation created at initialization is maintained to the extent that irrelevant image embeddings can be closer to each other than to their corresponding relevant text embeddings. This spatial disparity in the embedding space hinders the model's ability to effectively align and understand the relationships between visual and textual data, leading to suboptimal performance in tasks requiring integrated multi-modal comprehension. The existence of the modality gap is particularly detrimental when adapting pretrained vision-language models to cross-modal generation tasks, such as image captioning. As highlighted by several studies [35, 20], narrowing modality gap correlates with improved performance in cross-modal tasks. As shown in Fig. 4, we visualize the embeddings of videos and their corresponding text descriptions at three hierarchical levels: clip-narration, phase-keystep, and video-abstract. Our proposed model demonstrates a significant reduction in the modality gap compared to the SurgVLP model. This alignment across different hierarchical levels ensures a more comprehensive and cohesive understanding of the multi-modal data, leading to superior performance in tasks like image captioning and other vision-language applications. Figure 4: Modality gap visualization in different hierarchical levels. It shows that our model closes the modality gap incurred from the initialization after the hierarchical pretraining. <!-- image --> ## F Surgical Phase Recognition Results We demonstrate the zero-shot surgical phase recognition to reflect the surgical scene understanding ability of our pretrained model. Our model can identify surgical phases of different types of surgical procedures without any finetuning. Both success and failure examples are shown. Surgical Term Understanding. In Fig. 5, we show that the pretrained model excels at identifying the 'washing' phase in surgical procedures, demonstrating its capability to accurately recognize high-level surgical activities. This proficiency enhances surgical assistance systems, improving real-time analysis and decision-making in operating rooms. Instrument Identification. In Fig. 6, we demonstrate how the visual embedding is significantly influenced by the presence of surgical instruments. Specifically, in the first row, the semantic meaning of the image changes from "calot triangle dissection" to "clip and cut" due to the appearance of a hook, even though the other anatomical features remain similar. ## G Limitations As the pretraining process at clip-level requires additional supervision signals, i.e., visual selfsupervision, the memory and computation overhead increase compared to the vanilla HecVL pretraining. Also, during the phase- and video-level pretraining, the process of dynamic time warping can be time-consuming because it is based on dynamic programming, slowing down the pretraining iteration when handling longer-term surgical videos. Additionally, the knowledge augmentation on keystep Figure 5: Qualitative surgical phase recognition results on hysterectomy. The y-axis is the class names. The x-axis is the probability of each class. The bottom right image shows that the pretrained model understands the blood fluid. <!-- image --> Figure 6: Qualitative surgical phase recognition results on cholecystectomy. The y-axis is the class names. The x-axis is the probability of each class. We find that the pretrained model is triggered by the instrument occurrence, such as hook in the second row. <!-- image --> and abstract texts need to be modified to fit the other video-language pretraining datasets [4, 79] as their hierarchical paired texts are annotated manually. Instead, our knowledge augmentation is more suitable for videos in the wild from online platforms. To address these limitations, future work could focus on developing a general textual augmentation strategy using the LLM's internal knowledge, adapting to the instructional videos that miss keystep and abstract text descriptions. Furthermore, techniques for decentralizing the video-language pretraining could be explored, aiming to pretrain with multi-centric vision-language samples while preserving privacy using the federated learning strategy. This could address the scaling problem in surgical vision-language pretraining and improve the generalizationability across the centers. ## H Knowledge Augmentation Build Surgical Knowledge Base. In Fig. 7, we show that the internal surgical knowledge of large language models can be elicited to build the external knowledge base. Build Surgical Knowledge Base. In Fig. 8, Fig. 9 and Fig. 10, we show that the knowledge of large language model can be used to enrich the semantics of the hierarchical texts, i.e., narrations, keysteps, and abstracts. Notably, it can explain high-level keystep words into descriptive sentences, enhancing textual diversity and preventing overfitting. ## Laparoscopic right colectomy for cecal cancer - Position the patient appropriately for a laparoscopic right colectomy: This usually involves placing the patient in the supine position. - Using a trocar; create an access point to the abdomen to allow for the insertion of the laparoscope - Identify the medial approach and begin the mobilization of the mesentery: Carefully navigate the laparoscopic tool to disconnect the right side of the colon from the rest of the organ. - Through the laparoscope, verify the presence of the cecal cancer and its location on the right colon. - Roam around delicately to identify the key anatomic landmarks. This could include the superior mesenteric vein, ileocolic vessels, or the duodenum. - Cut the anastomosis stapler to release the healthy section of the right colon. - Complete the division of the mesentery intracorporeally: Separate the right colon from the rest of the bowel and carefully preserve the oncologic clearance\_ - Extract the resected right colon extracorporeally through a small suprapubic incision. Take caution to make as small an incision as possible to ensure minimal harm to the patient. - 10. After ensuring the anastomosis is secure and not leaking, remove the laparoscope Please note: This is a broad outline of the steps undertaken during a laparoscopic right colectomy for cecal cancer. The specific steps may vary based on surgeon's expertise, patient's anatomy; and clinical situation. - Complete the stapled anastomosis extracorporeally. Connect the healthy section of the colon back to the rest of the organ\_ ## Redo Nissen fundoplication with stapled-wedge Collis gastroplasty - Follow this by identifying the mechanism underlying the failure of the initial repair. - Start the procedure by down the previous fundoplication. taking - Perform an extensive mobilization of the esophagus through the hiatus to achieve an adequate length of intra-abdominal esophagus. - Following the gastroplasty; a 2.Scm of tension-free intra-abdominal esophagus should be achieved. - Despite the mobilization; if the esophagus remains too short, perform a Collis gastroplasty using the wedge gastrectomy technique over a 50 French bougie. - Repair the hiatus with interrupted non-absorbable sutures - Finally; perform a standard Nissen fundoplication. ## Stepwise approach for laparoscopic reversal of Hartmann's procedure Establish pneumoperitoneum via a Veress needle to inflate the abdomen, creating a space in which to work. - Position the patient on the operating table after administering general anesthesia to ensure patient comfort and positioning: - Insert three trocars (ports) into the patient's abdomen to allow for the passage of laparoscopic instruments\_ - Inspect the abdomen with a laparoscope to locate the previous colonic stump and assess adhesions and general abdominal conditions - Begin the process of adhesiolysis, involving the careful separation of adhesions between the abdominal wall and the colon. - Divide the colon intra-abdominally using a laparoscopic stapler; which seals off the colon and prevents leakage of bowel contents. - Proceed with the mobilization of the colon by carefully performing a medial-to-lateral dissection. - Identify the rectal stump and mobilize it within the pelvis in readiness for the reconnection of the bowel. - 10 Secure the anastomosis by placing sutures and applying surgical staples to ensure a secure connection with no leakage. - An anastomosis (connection) is created between the divided colon and the rectal stump, restoring intestinal continuity. - 11 Inspect the whole abdominal cavity visually with the laparoscope checking for any signs of bleeding, injury or any overlooked issue before ending the procedure - 13 Clean the surgical area thoroughly: - 12 The trocars are then removed, and the incisions sutured. The pneumoperitoneum is deflated. - 14 Dress the post-operative wounds correctly. ## Laparoscopic extraction of a CBD stone after failure of ERCP (duodenal perforation) - The surgical area is prepared and patient is positioned for laparoscopic common bile duct (CBD) exploration. - The bladder is reached and exposed utilizing laparoscopic tools. gall - Trocars are inserted at suitable locations in the abdominal region to carry out the procedure. - The cystic duct is identified through careful maneuvering with laparoscopic instruments \_ - In case of large bile duct stones which cannot be extracted through the cystic duct, a choledochotomy is performed. - trans-cystic approach is taken to explore the Common Bile Duct - The CBD stone is visually located using the laparoscopic camera - The stone is securely extracted from the body through the previously created trocar incisions. - Laparoscopic instruments are used to extract the stone from the Common Bile Duct. - 10. Once the stone is completely removed, the common bile duct and cystic duct are checked for any potential remaining stones or blockages. - 11 Procedure concludes with the removal of all laparoscopic tools and the closure of all incisions\_ Figure 7: Example of surgical step knowledge base based on the large language models. - Source: and this be for the so be cut the mesh just in the middle about seven centimeter link - Source: inferior epigastric vessel come from here - 2. Target: Select a mesh of appropriate dimensions that completely covers the hernia defect and extends at least 3 centimetres beyond the defect in all directions - Target: Utilize dissection instruments to make an opening between the preperitoneal space and the transversalis fascia for easy access to the inguinal region - Target: Utilize meticulous dissection techniques to divide the blood vessels close to the bowel, ensuring minimal damage to the surrounding area - Source: the plain zero be often very thickened in this inflammatory condition and capsule dissection must be perform in order to help we find the plain and continued dissection - Source: the sigmoid colon be now or most completely release from the lateral side wall - 9. Source: we can morgue correctly define the way to proceed with the dissection - Target: Identify and diagnose the patient with diverticulosis and chronic colo-vesical fistula - 10 Target: Proceed with a combination of lateral and medial approach for the mobilization of the mesocolon - 12. Target: Begin by positioning three ports (Smm, 12mm, 5mm) in the abdomen for laparoscopy - 11. Source: a percutaneous suture use a straight needle be insert in the epigastric region and pass towards the apex of the right carotid - 13. Source: middle colic vessel be clip and divide just above the body of the pancreas - 14. Target: Locate the line of demarcation for the resection, ensuring to capture all the polyps and the other lesion sites observed during the preoperative investigations Figure 8: Knowledge augmentation on the narration texts. - Source: Opening of lesser omentum - 3. Source: Start of gastric tubulization - Target: The lesser omentum, a fatty apron-like structure that covers the stomach and first part of the duodenum, is opened to allow access to the stomach - Target: At this step, the surgeon begins creating a tube-like shape from the remaining portion of the stomach, also known as gastric tubulization - 5. Source: End of tubulization - 6. Target: This is when the surgeon completes the tubulization process, finalizing the smaller; sleeve-like shape of the stomach - Source: Division of greater omentum - 8. Target: In this step, the surgeon divides the greater omentum, a large apron-like fold of visceral peritoneum that down from the stomach hangs - 9. Source: Jejunojejunostomy - 11. Source: Gastrojejunostomy - 10 Target: The surgeon creates an opening in the two loops with a cautery hook for passage of the linear stapler closes the opening using absorbable sutures and - 12 Target: The surgeon executes the gastrojejunostomy using a circular stapler; creating a connection between the stomach and jejunum - 14 Target: Towards the end, the surgeon closes Petersen's space, a potential space after Roux-en-Y gastric bypass, to prevent internal herniation - 13. Source: Closure of Petersen's defect - 15. Source: Anvil placement - 17 Source: Division of the ileocolic vessels - 16. Target: The end of a nasogastric tube, attached to the anvil, is passed down from the mouth into the stomach - 18. Target: The surgeon separates the blood vessels connected to the ileum and colon to prevent bleeding during the procedure - 19. Source: Preparing the anastomosis - 20. Target: The surgeon prepares for the anastomosis, or the surgical connection between two parts of the intestine Figure 9: Knowledge augmentation on the keystep texts. - Source: This edit of a live operation demonstrates the performance of a laparoscopic gastric bypass. It demonstrates nicely manoeuvres such as retrocolic placement of the Roux limb and hand-sewn gastrojejunal anastomosis - Source: This video shows the case of a female patient presenting with a low rectal cancer for which neoadjuvant therapy is used. The author performs a totally laparoscopic TME using a medial approach. A colorectal anastomosis without bowel protection is performed - Target: This video shows a laparoscopic gastric bypass surgery; focusing on stomach and duodenum procedures and bariatric surgery techniques for morbid obesity treatment. Main activities involve the retrocolic placement of the Roux limb and hand-sewn gastrojejunal anastomosis\_ demonstrate the techniques and maneuvers used during this surgery They - Target: This is a surgical lecture video on a laparoscopic low anterior resection with Total Mesorectal Excision (TME) and medial mobilization of the splenic flexure in a female patient. This procedure is utilized to treat a low rectal cancer and involves the use of a medial approach. The video details how to perform a colorectal anastomosis without bowel protection The procedure is entirely laparoscopic - Target: This educational video demonstrates a laparoscopic sleeve gastrectomy for a morbidly obese patient. The surgical procedure involves techniques such as the placement of trocars and the first of the linear stapler. It also addresses potential surgical pitfalls to ensure the adequate execution of the procedure and prevention of complications. The video highlights that oversewing of the staple line isn't performed during the procedure and also discusses the methods for thrombosis prophylaxis firing - Source: In this live educational video, Professor Himpens presents the case of a 34-year-old female patient (BMI of with history of morbid obesity since adolescence. She will undergo a laparoscopic sleeve gastrectomy (LSG). The preoperative work-up was normal. She had lost 2Kg six months before the procedure. Nowadays, laparoscopic sleeve gastrectomy (LSG) is one of the most commonly performed bariatric procedures. Surgical pitfalls are emphasized during the video to make sure that LSG is achieved adequately and to prevent any potential complications. In addition, trocars placement, location of the first firing of the linear stapler; the reasons why oversewing of the staple line is not performed, and thrombosis prophylaxis are also discussed during the procedure 41) - Source: Intrathoracic migration of the fundoplication is one of the most common causes of failure after antireflux surgery: When the patient develops symptoms related to the volume of intramediastinal hernia, the only option is to reoperate Such redos are complex and necessitate a thorough and painsta approach to the potential underlying mechanisms causing intrathoracic migration, namely the length of the esophagus and cruroplasty king - Source: This video demonstrates our transumbilical three-trocar technique for single incision total colectomy and partial proctectomy with intracorporeal side-to-end ileorectal anastomosis using standard laparoscopic instrumentation. The patient is a thin 19-year-old with a BMI of 19 presenting with familial adenomatous polyposis (FAP). The previous colonoscopy has shown 300 polyps in the colon and very few in the distal rectum. Conventional trocars (5mm, 1Omm, and 12mm) are used through a 3.Scm transumbilical incision. The ligation of the vessels is mostly carried out by the Ligasure-V vessel-sealing device using a medial-to-lateral approach. The specimen is extracted through the umbilical incision after removal of the 1Omm and 12mm cannulas. The ileorectal anastomosis is carried out intracorporeally using a double stapling technique boy - Target: This surgical video falls under the categories of stomach and duodenum, hiatal hernia, reflux, Nissen fundoplication, and hernia surgery: The video demonstrates a reoperation for symptomatic intrathoracic migration of a fundoplication; involving valve repositioning and reinforced crural repair: The principal activities consist of examining the underlying mechanisms causing intrathoracic migration such as the length of the esophagus and cruroplasty - 10 Target: The video shows a transumbilical single incision laparoscopic total colectomy and partial proctectomy with ileorectal anastomosis performed on a 19-year-old patient with familial adenomatous polyposis. The surgery primarily uses a three-trocar technique and standard laparoscopic instruments including Ligasure-V vessel-sealing device for ligating vessels. The surgery involves making a 3.Scm transumbilical incision using 5mm, 1Omm, and 12mm trocars. The colectomy specimen is extracted through the same umbilical incision. The final ileorectal anastomosis is achieved intracorporeally employing a double stapling method Figure 10: Knowledge augmentation on the abstract texts. ## NeurIPS Paper Checklist ## 1. Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: [Yes] Justification: Our experimental results on multiple datasets are consistent with the claims in the abstract and introduction. Guidelines: - · The answer NA means that the abstract and introduction do not include the claims made in the paper. - · The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. - · The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. - · It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. ## 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: We discuss the limitation in the Appendix G. ## Guidelines: - · The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. - · The authors are encouraged to create a separate "Limitations" section in their paper. - · The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. - · The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. - · The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. - · The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. - · If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. - · While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. ## 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [NA] Justification: We do not include the theoretical assumption and experiments. ## Guidelines: - · The answer NA means that the paper does not include theoretical results. - · All the theorems, formulas, and proofs in the paper should be numbered and crossreferenced. - · All assumptions should be clearly stated or referenced in the statement of any theorems. - · The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. - · Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. - · Theorems and Lemmas that the proof relies upon should be properly referenced. ## 4. Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: We provide the detailed experiment setup in the Experiments section and appendix. Our model is evaluated on the public dataset. We will also provide the model weights and config file to reproduce the results. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. - · If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. - · Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. - · While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example - (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. - (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. - (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). - (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. ## 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [No] Justification: We are working on cleaning the code for now as it is a systematic codebase that is related to multiple research works. We will release the code after the acceptance. ## Guidelines: - · The answer NA means that paper does not include experiments requiring code. - · Please see the NeurIPS code and data submission guidelines ( https://nips.cc/ public/guides/CodeSubmissionPolicy ) for more details. - · While we encourage the release of code and data, we understand that this might not be possible, so 'No' is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). - · The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines ( https: //nips.cc/public/guides/CodeSubmissionPolicy ) for more details. - · The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. - · The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. - · At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). - · Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. ## 6. Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: See Experiments section. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. - · The full details can be provided either with the code, in appendix, or as supplemental material. ## 7. Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [No] Justification: The video-language pretraining is too computationally expensive to provide the error bar. We fix the random seed for the reproducibility. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. - · The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). - · The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) - · The assumptions made should be given (e.g., Normally distributed errors). - · It should be clear whether the error bar is the standard deviation or the standard error of the mean. - · It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. - · For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). - · If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. ## 8. Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [Yes] Justification: See Experiment section. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. - · The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. - · The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). ## 9. Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ? Answer: [Yes] Justification: Our code follows the code of ethics. ## Guidelines: - · The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. - · If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. - · The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). ## 10. Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [NA] Justification: The social impact is minor in our work as we create the dataset from the open educational platforms, which are open to any learner. ## Guidelines: - · The answer NA means that there is no societal impact of the work performed. - · If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. - · Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. - · The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. - · The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. - · If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). ## 11. Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [NA] Justification: Unlike generative models, this work provides generalist scene understanding as the foundation module for surgical data science. The data are anonymized because of the model encoding. ## Guidelines: - · The answer NA means that the paper poses no such risks. - · Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. - · Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. - · We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. ## 12. Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: [Yes] Justification: We will cite the original owners' assets when we release the codebase. ## Guidelines: - · The answer NA means that the paper does not use existing assets. - · The authors should cite the original paper that produced the code package or dataset. - · The authors should state which version of the asset is used and, if possible, include a URL. - · The name of the license (e.g., CC-BY 4.0) should be included for each asset. - · For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. - · If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. - · For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. - · If this information is not available online, the authors are encouraged to reach out to the asset's creators. ## 13. New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [Yes] Justification: We will provide the documentation along with the code and dataset. ## Guidelines: - · The answer NA means that the paper does not release new assets. - · Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. - · The paper should discuss whether and how consent was obtained from people whose asset is used. - · At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. ## 14. Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [NA] Justification: We do not involve crowd-sourcing and human subjects. ## Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. - · According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. ## 15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [NA] Justification: We do not involve crowd-sourcing and human subjects. ## Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. - · We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. - · For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
zuwLGhgxtQ
A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers
We study the complexity of heavy-tailed sampling and present a separation result in terms of obtaining high-accuracy versus low-accuracy guarantees i.e., samplers that require only $\mathcal{O}(\log(1/\varepsilon))$ versus $\Omega(\text{poly}(1/\varepsilon))$ iterations to output a sample which is $\varepsilon$-close to the target in $\chi^2$-divergence. Our results are presented for proximal samplers that are based on Gaussian versus stable oracles. We show that proximal samplers based on the Gaussian oracle have a fundamental barrier in that they necessarily achieve only low-accuracy guarantees when sampling from a class of heavy-tailed targets. In contrast, proximal samplers based on the stable oracle exhibit high-accuracy guarantees, thereby overcoming the aforementioned limitation. We also prove lower bounds for samplers under the stable oracle and show that our upper bounds cannot be fundamentally improved.
https://openreview.net/pdf/bd86dfe1f5fac662f55df1bccfbb1134cf9043ed.pdf
[ { "confidence": 3, "rating": 7, "review_id": "e2ERikvqJN", "review_text": "The paper investigates the complexity of sampling from heavy-tailed distributions and presents a distinction between obtaining high-accuracy and low-accuracy guarantees. It analyzes two types of proximal samplers: those based on Gaussian oracles and those based on stable oracles. The main findings are that Gaussian oracle-based samplers can only achieve low-accuracy guarantees when sampling from heavy-tailed distributions, while stable oracle-based samplers can achieve high-accuracy guarantees. Additionally, the paper establishes lower bounds for samplers using the stable oracle, indicating that the presented upper bounds are optimal and cannot be fundamentally improved.\n\n1. The problem is well-motivated and interesting. \n2. Designed the algorithms and derived the upper bounds and lower bounds for different settings. \n3. The authors also provided insightful discussion.\n4. The authors provided solid theoretical proof for the results.\n\nThere is no experiment to verify the theoretical findings.\n\n1. Can you give an example in the real-world to motivate your problem?\n2. Is it possible to run some experiments to verify your results?" }, { "confidence": 1, "rating": 7, "review_id": "K2UBZSWIwI", "review_text": "This paper studies the problem of heavy-tailed sampling. First, the paper shows that while the gaussian proximal samplers are efficient for light-tailed targets, they are not accurate for heavy-tailed ones; the paper develops a lower bounds for the Gaussian proximal samplers, which reveals a fundamental challenge in heavy-tailed settings.\n\nThen, the paper proceeds to develop a novel samplers based on restricted alpha-stable oracle; the insight is to replace the standard heat equation in gaussian oracle with a fractional heat flow. The paper proves that under suitable conditions the proposed sampler is efficient for heavy-tailed targets. Additionally, the paper proposes a practical implementation for a particular case of alpha=1.\n\n- Novel theoretical analysis for the gaussian oracle sampler, which provides a new insight to developing sampling algorithms\n\n- A novel methodology for heavy-tailed sampling\n\n- The paper is purely theoretical and lacks experimental evaluation; it would be nice to at least have a toy illustration for the implementable algorithm 2+3 in the alpha=1 case.\n\n- As the authors discussed in Sec5, the current paper does not present implementable algorithms for general alpha values in (0,2).\n\n- I wonder if the efficiency rejection sampling efficiency in Alg.3 has been taken into account of the sampler's theoretical complexity and practical complexity?\n\n- Maybe I am missing this -- what is the impact of alpha?" }, { "confidence": 1, "rating": 7, "review_id": "5Ofh7FZ5zb", "review_text": "The paper focus on studying the complexity of heavy-tailed sampling and present a separation result in terms of obtaining high-accuracy versus low-accuracy guarantees. Their results are presented for proximal samplers that are based on Gaussian versus stable oracles. Authors show that proximal samplers based on the Gaussian oracle have a fundamental barrier in that they necessarily achieve only low-accuracy guarantees when sampling from a class of heavy-tailed targets. In contrast, proximal samplers based on the stable oracle exhibit high-accuracy guarantees, thereby overcoming the aforementioned limitation. They also prove lower bounds for samplers under the stable oracle and show that our upper bounds cannot be fundamentally improved.\n\nAlthough I am not an expert in this field, I find this work quite interesting. The authors provide new material and support their statements with proofs.\n\nThe paper is not tested in any way on a numerical experiment. I am convinced that a paper presented at this type of conference should be both motivated by a real-world application and tested numerically, e.g., on a near-real-world formulation of the problem.\n\n**After a rebuttal process**, the authors agreed with this weakness and promised to add the experiments to the final version of the paper.\n\nN/A" }, { "confidence": 3, "rating": 8, "review_id": "uc6KPPkdL0", "review_text": "The authors provide a lower bound for sampling from heavy tailed distributions under the Gaussian oracle of order $O(\\textup{poly}(1/\\varepsilon))$. They then propose an alternative proximal sampling algorithm using the $\\alpha$-stable oracle that achieves a convergence rate of $O(\\log(1/\\varepsilon))$ for heavy-tailed distributions satisfying a fractional Poincare inequality. They then provide a practical implementation of the stable proximal sampler, and lower bounds on its convergence rate.\n\n- This work presents a very nice combination of results showing a separation in the performance of stable and Gaussian proximal samplers. The combination of lower and upper bounds separating the two methods makes the work a particularly interesting contribution.\n\n- The addition of a practical implementation of the stable proximal sampler is nice to have, demonstrating that it is viable in practice.\n\n- The work is generally clearly presented and the authors are clear about their contributions.\n\n- Overall, I consider this to be a very sound piece of theoretical work.\n\nI have no major concerns about this paper. The presentation is somewhat dense in places, though this is mostly just a consequence of it being a very technical paper and not a flaw as such. If the authors want to make the claim that practicioners should use the stable proximal sampler in applied settings, then they may want to provide empirical evidence of its performance compared to the Gaussian proximal sampler. However, I understand that this is not the main purpose of this theoretical paper.\n\nI have no clarifications to request." }, { "confidence": 2, "rating": 6, "review_id": "9OSFu4H7g1", "review_text": "This paper studies the complexity of sampling heavy-tailed distributions. It provides lower bounds on the complexity of Gaussian-based samplers for a class of heavy-tailed targets. Then, the paper constructs proximal samplers based on stable oracles, which improve the sampling complexity.\n\n* This paper is well-written. The background of sampling and the research problems regarding sampling complexity are clearly introduced. The contributions of the lower bound on Gaussian-based samplers for heavy-tailed targets and the improved complexity using stable oracles are clearly presented.\n* The paper is technically sound. The definitions and assumptions are discussed clearly, and the theoretical results are supported by proof sketches.\n\nThe contribution of the paper could be improved with empirical experiments to evaluate the sampling algorithms and their complexity.\n\n* Is there any intuition that a Gaussian-based sampler has lower accuracy for heavy-tailed targets than for non-heavy-tailed targets?\n* How would a Gaussian-based sampler compare with a stable oracle for not heavy-tailed targets?" } ]
## A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers Ye He Georgia Institute of Technology [email protected] Alireza Mousavi-Hosseini University of Toronto, and Vector Institute [email protected] Krishnakumar Balasubramanian University of California, Davis [email protected] ## Murat A. Erdogdu University of Toronto, and Vector Institute [email protected] ## Abstract We study the complexity of heavy-tailed sampling and present a separation result in terms of obtaining high-accuracy versus low-accuracy guarantees i.e., samplers that require only O (log(1 /ε )) versus Ω( poly (1 /ε )) iterations to output a sample which is ε -close to the target in χ 2 -divergence. Our results are presented for proximal samplers that are based on Gaussian versus stable oracles. We show that proximal samplers based on the Gaussian oracle have a fundamental barrier in that they necessarily achieve only low-accuracy guarantees when sampling from a class of heavy-tailed targets. In contrast, proximal samplers based on the stable oracle exhibit high-accuracy guarantees, thereby overcoming the aforementioned limitation. We also prove lower bounds for samplers under the stable oracle and show that our upper bounds cannot be fundamentally improved. ## 1 Introduction The task of sampling from heavy-tailed targets arises in various domains such as Bayesian statistics [GJPS08, GLM18], machine learning [CDV09, BZ17, N¸ SR19, SZTG20, DKTZ20], robust ¸ statistics [KN04, JR07, Kam18, YŁR22], multiple comparison procedures [GBH04, GB09], and study of geophysical systems [SP15, QM16, PBEM23]. This problem is particularly challenging when using gradient-based Markov Chain Monte Carlo (MCMC) algorithms due to diminishing gradients, which occurs when the tails of the target density decay at a slow (e.g. polynomial) rate. Indeed, canonical algorithms like Langevin Monte Carlo (LMC) have been empirically observed to perform poorly [LWME19, HMW21, HFBE24] when sampling from such heavy-tailed targets. Several approaches have been proposed in the literature to overcome these limitations of LMC and related algorithms. The predominant ones include (i) transformation-based approaches, where a diffeomorphic (invertible) transformation is used to first map the heavy-tailed density to a light-tailed one so that a light-tailed sampling algorithm can be used [JG12, YŁR22, HBE24], (ii) discretizing general Itô diffusions with non-standard Brownian motion that have heavy-tailed densities as their equilibrium density [EMS18, LWME19, HFBE24], and (iii) discretizing stable-driven stochastic differential equations [ZZ23]. However, the few theoretical results available on the analysis of algorithms based on approaches (i) and (ii) provide only low-accuracy heavy-tailed samplers; such algorithms require poly (1 /ε ) iterations to obtain a sample that is ε -close to the target in a reasonable metric of choice. Furthermore, quantitative complexity guarantees for the sampling approach used in (iii) are not yet available; thus, existing comparisons are mainly based on empirical studies. In stark contrast, when the target density is light-tailed it is well-known that algorithms like proximal samplers based on Gaussian oracles and the Metropolis Adjusted Langevin Algorithm (MALA) have high-accuracy guarantees; these algorithms require only polylog (1 /ε ) iterations to obtain a sample which is ε -close to the target in some metric. See, for example, the works by [DCWY19, LST21b, | | ν ≥ 1 | ν ≥ 1 | ν ∈ (0 , 1) | ν ∈ (0 , 1) | |------------|-------------------------|---------------------------|-------------------------|-----------------------------| | Oracle | Gaussian (Alg. 1) | Stable (Alg. 2 &3) | Gaussian (Alg. 1) | Stable (Alg. 2 &3) | | Complexity | ˜ Ω( ε - 1 ν ) (Cor. 2) | O (log( ε - 1 )) (Cor. 5) | ˜ Ω( ε - 1 ν ) (Cor. 2) | ˜ O ( ε - 1 ν +1 ) (Cor. 5) | Table 1: Separation for Proximal Samplers: Gaussian vs. practical Stable oracles ( α =1 ): Upper and lower iteration complexity bounds to generate an ε -accurate sample in χ 2 -divergence from the generalized Cauchy target densities with degrees of freedom ν , i.e. π ν ∝ (1 + | x | 2 ) -( d + ) 2 ν / . Here, ˜ Ω , ˜ O hide constants depending on ν and polylog ( d, 1 /ε ) . For the proximal sampler with a general α -Stable oracle (Algorithm 2), the upper bound for ν ∈ (0 1) , is O (log(1 /ε )) when α = ν . The lower bounds are from Corollary 2 via 2TV 2 ≤ χ 2 . WSC22a, CCSW22, CG23]. Specifically, [LST21b] analyzed the proximal sampling algorithm to sample from a class of strongly log-concave densities and obtained high-accuracy guarantees. [CCSW22] established similar high-accuracy guarantees for the proximal sampler to sample from target densities that satisfy a certain functional inequality, covering a range of light-tailed densities with exponentially fast tail decay (e.g. log-Sobolev and Poincaré inequalities). However, it is not clear if the proximal sampler achieves the same desirable performance when the target is not light-tailed. In light of existing results, in this work, we first consider the following question: - Q1. What are the fundamental limits of proximal samplers under the Gaussian oracle when sampling from heavy-tailed targets? To answer this question, we construct lower bounds showing that Gaussian-based samplers necessarily require poly (1 /ε ) iterations to sample from a class of heavy-tailed targets. These results complement the lower bounds on the complexity of sampling from heavy-tailed densities using the LMC algorithm established in [MHFH + 23]. With this lower bound in hand, we next consider the following question: - Q2. Is it possible to design high-accuracy samplers for heavy-tailed targets? We answer this in the affirmative by constructing proximal samplers that are based on stable oracles (see Definition 1 and Algorithm 2) by leveraging the fractional heat-flow corresponding to a class of stable-driven SDEs. We analyze the complexity of this algorithm when sampling from heavy-tailed densities that satisfy a fractional Poincaré inequality, and establish that they require only log (1 /ε ) iterations. Together, our answers to Q1 and Q2 provide a clear separation between samplers based on Gaussian and stable oracles. Our contributions can be summarized as follows. - · Lower bounds for the Gaussian oracle : In Section 2, we focus on Q1 and establish in Theorems 1 and 2 respectively that the Langevin diffusion and the proximal sampler based on the Gaussian oracle necessarily have a fundamental barrier when sampling from heavy-tailed densities. Our proof technique builds on [Hai10], and provides a novel perspective for obtaining algorithmdependent lower bounds for sampling, which may be of independent interest. - · A proximal sampler based on the stable oracle: In Section 3, we introduce a proximal sampler based on the α -stable oracle, which fundamentally relies on the exact implementations of the fractional heat flow that correspond to a stable-driven SDE. Here, the parameter α determines the allowed class of heavy-tailed targets which could be sampled with high-accuracy. In Theorem 3 and Proposition 1, we provide upper bounds on the iteration complexity that are of smaller order than the corresponding lower bounds established for the Gaussian oracle. We provide a rejectionsampling based implementation of the α -stable oracle for the case α = 1 and prove complexity upper bounds in Corollary 3. Finally, in Theorem 4, considering a sub-class of Cauchy-type targets, we prove lower bounds showing that our upper bounds cannot be fundamentally improved. An illustration of our results for Cauchy target densities, π ν ∝ (1 + | x | 2 ) -( d + ) 2 ν / where ν is the degrees of freedom, is provided in Table 1. We specifically consider the practical version of the stable proximal sampler with α = 1 (i.e., Algorithm 2 with the stable oracle implemented by Algorithm 3), and show that it always outperforms the Gaussian proximal sampler (Algorithm 1). Indeed, when ν ≥ 1 , the separation between these algorithms is obvious. In the case ν ∈ (0 1) , , Algorithm 2 &amp; 3 has a poly( 1 /ε ) complexity, nevertheless, it still improves the complexity of the Gaussian proximal sampler by a factor of ε . Wealso show via lower bounds (in Section 3.4) that the poly( 1 /ε ) complexity for Algorithm 2 &amp; 3, when ν ∈ (0 , 1) , can only be improved up to certain factors. We remark that for the ideal proximal sampler (Algorithm 2), the upper bound when ν ∈ (0 1) , is also O (log(1 /ε )) . These results demonstrate a clear separation between Gaussian and stable proximal samplers. Related works. We first discuss works analyzing the complexity of heavy-tailed sampling as characterized by a functional inequality assumption. [CDV09] analyzed the connection between sampling algorithms for a class of s -concave densities satisfying a certain isoperimetry condition related to weighted Poincaré inequalities. [HFBE24] undertook a mean-square analysis of discretization of a specific Itô diffusion that characterizes a class of heavy-tailed densities satisfying a weighted Poincaré inequality. [ALPW22] and [ALPW23] analyzed the complexity of pseudo-marginal MCMC algorithms and the random-walk Metropolis algorithm respectively, under weak Poincaré inequalities. As mentioned before, [MHFH + 23] showed lower bounds for the LMC algorithm when the target density satisfies a weak Poincaré inequality. [HBE24] and [YŁR22] analyzed a transformation based approach for heavy-tailed sampling under conditions closely related to the same functional inequality. This transformation methodology is also used to demonstrate asymptotic exponential ergodicity for other sampling algorithms like the bouncy particle sampler and the zig-zag sampler, in the heavytailed settings [DBCD19, DGM20, BRZ19]. These works provide only low-accuracy guarantees for heavy-tailed sampling and do not consider the use of weak Fractional Poincaré inequalities. Recent years have witnessed a significant focus on (strongly) log-concave sampling, leading to an extensive body of work that is challenging to encapsulate succinctly. In the context of (strongly) logconcave or light-tailed distributions, a plethora of non-asymptotic investigations have been conducted on LMC variations, including advanced integrators [SL19, LWME19, HBE20], underdamped LMC [CCBJ18, EGZ19, CLW23, DRD20], and MALA [DCWY19, LST20, CLA + 21, WSC22b]. Outside the realm of log-concavity, the dissipativity assumption, which regulates the growth of the potential, has been used in numerous studies to derive convergence guarantees [DM17, RRT17, EMS18, EH21, MFWB22, EHZ22, BCE + 22]. While research on upper bounds of sampling algorithms' complexity has advanced considerably, the exploration of lower bounds is still nascent. [CGL + 22] explored the query complexity of sampling from strongly log-concave distributions in one-dimensional settings. [LZT22] established lower bounds for LMC in sampling from strongly log-concave distributions. [CBL22] presented lower bounds for sampling from strongly log-concave distributions with noisy gradients. [GLL20] focused on lower bounds for estimating normalizing constants of log-concave densities. Contributions by [LST21a] and [WSC22b] provide lower bounds in the metropolized algorithm category, including Langevin and Hamiltonian Monte Carlo, in strongly log-concave contexts. Finally, [CGLL22] contributed to lower bounds in Fisher information for non-log-concave sampling. ## 2 Lower Bounds for Sampling with the Gaussian Oracle In this section, we focus on Q1 for both the Langevin diffusion (in continuous time) and the proximal sampler (in discrete time), where both procedures have the target density as their invariant measures. Our results below illustrate the limitation of the Gaussian oracle 1 for heavy-tailed sampling in both continuous and discrete time, showing that the phenomenon is not because of the discretization effect, but is inherently related to the use of Gaussian oracles. Langevin diffusion. We first start with the overdamped Langevin diffusion (LD): <!-- formula-not-decoded --> LD achieves high-accuracy 'sampling' in continuous time, i.e. a polylog(1 /ε ) convergence rate in the light-tailed setting. We make the following dissipativity-type assumption. Assumption 1. The target density is given by π X ( x ) ∝ exp( -V ( x )) , where V : R d → R satisfies <!-- formula-not-decoded --> Remark 1. The upper bound on ⟨ x, ∇ V ( x ) ⟩ ensures that V grows at most logarithmically in | x | . Consequently, π X is heavy-tailed and in fact does not satisfy a Poincaré inequality. The lower bound on ⟨ x, ∇ V ( x ) ⟩ is only needed for deriving the dimension dependency in our guarantees. If one is only interested in the ε dependency, this condition can be replaced with 0 ≤ ⟨ x, ∇ V ( x ) ⟩ . A classical example of a density satisfying the above assumption is the generalized Cauchy density with degrees of freedom ν = ν 1 = ν 2 &gt; 0 , where the potential is given by <!-- formula-not-decoded --> The following result, proved in Appendix A, provides a lower bound on the performance of LD. 1 Here, for the sake of unified presentation, we refer the use of Brownian motion in (LD) as Gaussian oracle. ## Algorithm 1 Gaussian Proximal Sampler [LST21b] Theorem 1. Suppose π X ∝ exp( -V ) satisfies Assumption 1. Let X t be the solution of the Langevin diffusion, and µ t := Law( X t ) . Then, for any δ &gt; 0 , <!-- formula-not-decoded --> where κ δ := 1 ∨ 2 d + ν 2 ∨ ν 2 (1+ ) δ ( d + ν 2 ) δ , C δ ( µ 0 ) := 1 d + ν 2 E [(1 + | X 0 | 2 ) γ ] 1 /γ with γ = κ δ ( d + ν 2 ) / 2 , and C ν ,ν 1 2 is a constant depending only on ν 1 and ν 2 . If we assume | X 0 | ≤ O ( √ d ) for simplicity, then by choosing δ = 2 ln ln t ν 2 ln t ∧ 2 ln ln d ( ν 2 -ν 1 ) ln d , we obtain <!-- formula-not-decoded --> Thus, LD requires at least T = ˜ Ω ν ,ν 1 2 ( d ν 1 -ν 2 ν 2 (1 /ε ) 2 /ν 2 ) to reach ε error in total variation. While this bound may be small in high dimensions when ν 2 &gt; ν 1 , for the canonical model of Cauchy-type potentials with ν 2 = ν 1 = ν , it will be independent of dimension, as stated by the following result. Note that Assumption 1 can also cover a general scaling by replacing | x | with c x | | for some constant c , which would introduce a multiplicative factor of 1 /c 2 for the lower bound on T . This is expected as e.g., mixing to the Gibbs potential c 2 | x | 2 can be faster than mixing to | x | 2 by a factor of 1 /c 2 . Corollary 1. Consider the generalized Cauchy density π X ν ∝ exp( -V ν ) where V ν is as in (1) . Let X t be the solution of the Langevin diffusion, and µ t := Law( X t ) . For simplicity, assume the initialization satisfies | X 0 | ≤ O ( √ d ) . Then, achieving TV( π X ν , µ T ) ≤ ε requires T ≥ ˜ Ω ν ( ε -2 ν ) . The above lower bound implies that LD is a low-accuracy 'sampler' for this target density in the sense that it depends polynomially on 1 /ε ; this dependence gets worse with smaller ν as the tails get heavier. It is worth highlighting the gap between the upper bound of [MHFH + 23, Corollary 8], which is ˜ O ( 1 /ε 4 /ν ) , and the lower bound in Corollary 1. Gaussian proximal sampler. In the remainder of this section, we prove that the Gaussian proximal sampler, described in Algorithm 1, also suffers from a poly(1 /ε ) rate when the target density is heavy-tailed. In each iteration of Algorithm 1, the first step involves sampling a standard Gaussian random variable y k centered at the current iterate x k with variance ηI ; this is a one-step isotropic Brownian random walk. Alternatively, since the Fokker-Planck equation of the standard Brownian motion is the classical heat equation, this step could also be interpreted as an exact simulation of the heat flow; see, for example, [CG03] and [Wib18]. Specifically, the density of y k is the solution to the heat flow at time η with the initial condition being the density of x k . The second step is called the restricted Gaussian oracle (RGO) as coined by [LST21b]; under which ( x , y k k ) is a reversible Markov chain whose stationary density has x -marginal π X . Assumption 2. For some ν 2 ≥ ν 1 ≥ 0 , the target π X ( x ) ∝ exp( -V ( x )) with V : R d → R satisfies <!-- formula-not-decoded --> The first condition above also appears in Assumption 1 and the second condition implies the upper bound of Assumption 1; thus, the above assumption is stronger. Note that the generalized Cauchy measure (1) satisfies this assumption with ν 1 = ν 2 = ν . Under Assumption 2, we state the following lower bound on the Gaussian proximal sampler and defer its proof to Appendix A. Theorem 2. Suppose π X ∝ exp( -V ) satisfies Assumption 2. Let x k denote the k th iterate of the Gaussian proximal sampler (Algorithm 1) with step η and let ρ X k := Law( x k ) . Then, for any δ &gt; 0 , <!-- formula-not-decoded --> where κ δ , C δ ( µ 0 ) , and C ν ,ν 1 2 are defined in Theorem 1. Above, assuming | X 0 | ≤ O ( √ d ) with the same choice of δ as in Theorem 1 yields TV( π X ν , ρ X k ) ≥ ˜ Ω ν ,ν 1 2 ( d ν 1 -ν 2 2 ( kη ) -ν 2 2 ) . Note that in order for the RGO step to be efficiently implementable, we need to have a sufficiently small η . The state-of-the-art implementation of RGO requires a step size of order η = ˜ (1 O / Ld ( 1 / 2 )) when V has L -Lipschitz gradients [FYC23]. With this choice of step size, the above lower bound requires at least N = ˜ Ω ν ,ν 1 2 ( Ld 1 / 2+( ν 1 -ν 2 ) /ν 2 (1 /ε ) 2 /ν 2 ) iterations. The assumptions in Theorem 2 once again cover the canonical examples of generalized Cauchy densities, where we have L = d + ν , which simplifies the lower bound as follows. Corollary 2. Consider the generalized Cauchy density π X ν ∝ exp( -V ν ) where V ν is as in (1) . Let x k denote the k th iterate of the Gaussian proximal sampler, and define ρ X k := Law( x k ) , and choose the step size η = ˜ (1 O / Ld ( 1 / 2 )) . If we assume | X 0 | ≤ O ( √ d ) for simplicity, then achieving TV( π X ν , ρ X N ) ≤ ε requires N ≥ ˜ Ω ν ( d 3 2 ε -2 ν ) iterations. We emphasize that the above lower bound is of order poly (1 /ε ) as advertised. Thus, the RGO-based proximal sampler can only yield a low-accuracy guarantee in this setting. ## 3 Stable Proximal Sampler and the Restricted α -Stable Oracle Having characterized the limitations of Gaussian oracles for heavy-tailed sampling, thereby answering Q1 , in what follows, we will focus on Q2 and construct proximal samplers based on the α -stable oracle, and prove that they achieve high-accuracy guarantees when sampling from heavy-tailed targets. First, we provide a basic overview of α -stable processes and fractional heat flows. Isotropic α -stable process. For t ≥ 0 , let X ( α ) t be the isotropic stable Lévy process in R d , starting from x ∈ R d , with the index of stability α ∈ (0 2] , , defined uniquely via its characteristic function E x e i ⟨ ξ,X ( α ) t - ⟩ x = e - | | t ξ α . When α = 2 , X (2) t is a scaled Brownian motion, and when 0 &lt; α &lt; 2 , it becomes a pure Lévy jump process in R d . The transition density of X ( α ) t is then given by <!-- formula-not-decoded --> where the second equation above is the inverse Fourier transform of the characteristic function, thus returns the density. The transition kernel and the density in (2) have closed-form expressions for the special cases α = 1 2 , . In particular, when α = 1 , p (1) t reduces to a Cauchy density with degrees of freedom ν = 1 , i.e. p (1) t ( y ) ∝ | | ( y 2 + t 2 ) -( d +1) 2 / . We finally note that the isotropic stable Lévy process X ( α ) t displays self-similarity like the Brownian motion; the processes X ( α ) at and a 1 /α X ( α ) t have the same distribution. This property is crucial in the development of the stable proximal sampler. Fractional heat flow. The equation ∂ u t, x t ( ) = - -( ∆) α/ 2 u t, x ( ) with the condition u (0 , x ) = u 0 ( x ) is an extension of the classical heat flow, and is referred to as the fractional heat flow. Here, - -( ∆) α/ 2 is the fractional Laplacian operator with α ∈ (0 2] , , which is the infinitesimal generator of the isotropic α -stable process. For α = 2 , it reduces to the standard Laplacian operator ∆ . Stable proximal sampler. Let π x, y ( ) be a joint density such that π x, y ( ) ∝ π X ( x p ) ( α ) ( η x, y ; ) , where π X is the target and p ( α ) ( η x, y ; ) is the transition density of the α -stable process, introduced in (2). It is easy to verify that (i) the X -marginal of π is π X , (ii) the conditional density of Y given X is π Y X | ( ·| x ) = p ( α ) ( η x, ; · ) , (iii) the Y -marginal is π Y = π X ∗ p ( α ) η , i.e. π Y is obtained by evolving π X along the α -fractional heat flow for time η , and (iv) the conditional density of X given Y is π X Y | ( ·| y ) ∝ π X ( ) · p ( α ) ( η ; · , y ) . Based on these, we introduce the following stable oracle. Definition 1 (Restricted α -Stable Oracle) . Given y ∈ R d , an oracle that outputs a random vector distributed according to π X Y | ( ·| y ) , is called the Restricted α -Stable Oracle (R α SO). Note that when α = 2 , the R α SO reduces to the RGO of [LST21b]. The Stable Proximal Sampler (Algorithm 2) with parameter α is initialized at a point x 0 ∈ R d and performs Gibbs sampling on the joint density π . In each iteration, the first step involves sampling an isotropic α -stable random vector y k centered at the current iterate x k , which is a one-step isotropic α -stable random walk. This could also be interpreted as an exact simulation of the fractional heat flow. Indeed, due to the relation between the fractional heat flow and the isotropic stable process, the density of y k is exactly the solution to the α -fractional heat flow at time η with the initial condition being the density of x k . ## Algorithm 2 Stable Proximal Sampler with parameter α When α = 2 , the first step reduces to an isotropic Brownian random walk and a simulation of the classical heat flow. The second step calls the R α SO at the point y k . ## 3.1 Convergence guarantees We next provide convergence guarantees for the stable proximal sampler in χ 2 -divergence assuming access to the R α SO. Similar results for a practical implementation are presented in Section 3.2. To proceed, we introduce the fractional Poincaré inequality, first introduced in [WW15] to characterize a class of heavy-tailed densities including the canonical Cauchy class. Definition 2 (Fractional Poincaré Inequality) . For ϑ ∈ (0 , 2) , a probability density µ satisfies a ϑ -fractional Poincaré inequality (FPI) if there exists a positive constant C FPI ( ϑ ) such that for any function ϕ : R d → R in the domain of E ( ϑ ) µ , we have <!-- formula-not-decoded --> where E ( ϑ ) µ is a non-local Dirichlet form associated with µ defined as ̸ <!-- formula-not-decoded --> Remark 2. FPI is a weaker condition than Assumption 2. In fact, any density satisfying the first 2 conditions in Assumption 2 satisfies ϑ -FPI for all ϑ &lt; ν 1 [WW15, Theorem 1.1]. In Proposition 2, we show that as ϑ → 2 -, FPI becomes equivalent to the standard Poincaré inequality. In the sequel, ρ X k denotes the law of x k , ρ Y k denotes the law of y k , and ρ k = ρ X,Y k is the joint law of ( x , y k k ) . We provide the following convergence guarantee under an FPI, proved in Appendix B.2. Theorem 3. Assume that π X satisfies the α -FPI with parameter C FPI ( α ) for α ∈ (0 , 2) . For any step size η &gt; 0 and initial density ρ X 0 , the k th iterate of Algorithm 2, with parameter α , satisfies <!-- formula-not-decoded --> As a consequence of Remark 2 and Proposition 2, we recover the result in [CCSW22, Theorem 4], by letting α → 2 -. While our results in Theorem 3 are based on Algorithm 2 which requires exact calls to R α SO, the next result, proved in Appendix B.3, shows that even with an inexact implementation of R α SO, the error accumulation is at most linear, and Algorithm 2 still converges quickly. Proposition 1. Suppose the R α SO in Algorithm 2 is implemented inexactly, i.e. there exists a positive constant ε TV such that TV(˜ ρ X Y | k ( ·| y , ρ ) X Y | k ( ·| y )) ≤ ε TV for all y ∈ R d and k ≥ 1 , where ˜ ρ X Y | k ( ·| y ) is the density of the inexact R α SO sample conditioned on y . Let ˜ ρ X k be the density of the output of the k th step of Algorithm 2 with the inexact R α SO and ρ X k be the density of the output of k th step Algorithm 2 with the exact R α SO. Then, for all k ≥ 0 , <!-- formula-not-decoded --> Further, if ˜ ρ X 0 = ρ X 0 , for any K ≥ K 0 , we get TV(˜ ρ K X , π X ) ≤ ε , if ε TV ≤ ε/ 2 K , where the constant K 0 = (1 + C FPI ( α ) η -1 ) log ( χ 2 (˜ ρ X 0 | π X ) /ε 2 ) with C FPI ( α ) being the α -FPI parameter of π X . ## 3.2 A practical implementation of R α SO In the sequel, we introduce a practical implementation of R α SO when α = 1 . For this, we consider the case when the target density π X ∝ e -V satisfies the 1 -FPI with parameter C FPI (1) . A more thorough implementation of R α SO for other values of α will be investigated in future work. Assumption 3. There exist constants β, L &gt; 0 such that for any minimizer x ∗ ∈ arg min y ∈ R d V ( y ) and for all x ∈ R d , V satisfies V ( x ) -V ( x ∗ ) ≤ L x | -x ∗ | β . ## Algorithm 3 R SO Implementation for α α = 1 via Rejection Sampling | Input: V , x ∗ ∈ arg min V , η > 0 , y ∈ R d . while TRUE | // Rejection sampling | |-------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------| | Generate ( Z 1 , Z 2 , u ) ∼ N (0 , I d ) ⊗N (0 , 1) ⊗ U [0 , 1] x ← y + ηZ 1 / | Z 2 | return x if u ≤ exp( - V ( x ) + V ( x ∗ )) | // Cauchy random vector // Accept-reject step | Algorithm 3 provides an exact implementation of R α SO for α = 1 via rejection sampling. Inputs to this algorithm are the intermediate points y k in the stable proximal sampler (Algorithm 2). Note that Algorithm 3 requires a global minimizer of V , which is always assumed to exist, which guarantees that the acceptance probability is non-trivial. It generates proposals with density p (1) ( η ; · , y ) and utilizes that p (1) is a Cauchy density and Cauchy random vectors can be generated via ratios between a Gaussian random vector and square-root of a χ 2 random variable. Finally, the accept-reject step ensures that the output x has density π X Y | ( ·| y ) ∝ e -V p (1) ( η ; · , y ) . This makes Algorithm 3 a zeroth-order algorithm requiring only access to function evaluations of V . Under Assumption 3, by choosing a small step-size, we can control the expected number of rejections in Algorithm 3. We now state the iteration complexity of our stable proximal sampler with this R α SO implementation in the following result, whose proof is provided in Appendix B.3. Corollary 3. Assume V satisfies Assumption 3. If we choose the step-size η = Θ( d -1 2 L -1 β ) , then Algorithm 3 implements the R α SO with α = 1 , with the expected number of zeroth-order calls to V of order E [exp( L y | k | β )] . Further assume π X satisfies 1 -FPI with parameter C FPI (1) . Suppose we run Algorithm 2 with R α SO implemented for with α = 1 by Algorithm 3. Then, to return a sample which is ε -close in χ 2 -divergence to the target, the expected number of iterations required by Algorithm 2 is <!-- formula-not-decoded --> Note that the above result provides a high-accuracy guarantee for the implementable version of the stable proximal sampler (Algorithm 3) for a class of heavy-tailed targets, overcoming the fundamental barrier established in Theorem 2 for the Gaussian proximal sampler (i.e., Algorithm 1). A numerical illustration of this improvement is provided in Appendix D by sampling from student-t distributions. Remark 3. (1) Finding a global minimizer of the potential V can be hard, which could be avoided if a lower bound on the potential V is available; see Appendix B.3. (2) A trivial bound for E [exp( L y | k | β )] is exp( LM ) for M = E π X [ | X | β ] + χ 2 ( ρ X 0 | π X ) E π X [ | X | 2 β ] 1 2 . Since our main focus is high vs low accuracy samplers, deriving a sharper bound is beyond the scope of the current paper. ## 3.3 Illustrative examples To illustrate our results, we now apply the proximal algorithms to sample from Cauchy densities and discuss the complexity of both the ideal sampler (Algorithm 2) in which we can choose any α ∈ (0 , 2) and the implementable version with α = 1 (Algorithm 3). For the ideal sampler, we can choose α ≤ ν for any degrees of freedom ν &gt; 0 , and apply Theorem 3 since π ν satisfies a α -FPI [WW15]. Corollary 4. For any ν &gt; 0 , consider the generalized Cauchy target π ν ∝ exp( -V ν ) with V ν defined in (1) . For the stable proximal sampler with parameter α ∈ (0 2) , and α ≤ ν (i.e., Algorithm 2), suppose we set the step-size η ∈ (0 1) , and draw the initial sample from the standard Gaussian density. Then, the number of iterations required by Algorithm 2 to produce an ε -accurate sample in χ 2 -divergence is O ( C FPI ( α ) η -1 log( d/ε )) , where C FPI ( α ) is the α -FPI parameter of π ν . For the implementable sampler, since the parameter α is fixed to be 1 , whether a suitable FPI is satisfied or not depends on the degrees of freedom ν . Specifically, when ν ≥ 1 , 1 -FPI is satisfied and Corollary 5 applies. When ν ∈ (0 , 1) , on the other hand, 1 -FPI is not satisfied. To tackle this issue, we prove convergence guarantees for the proximal sampler under a weak fractional Poincaré inequality; the next corollary, proved in Appendix B.4, summarizes these results. Corollary 5. For the Cauchy target π ν ∝ exp( -V ν ) where V ν is defined in (1) , we consider Algorithm 2 with α = 1 , a standard Gaussian initialization, and R α SO implemented by Algorithm 3. - (1) When ν ≥ 1 , if we set the step-size η = Θ ( d -1 2 ( d + ν ) -4 ) , the expected number iterations required by Algorithm 2 to output a sample which is ε -close in χ 2 -divergence to the target is of order O ( C FPI (1) d 1 2 ( d + ν ) 4 log( d/ε ) ) , where C FPI (1) is the 1 -FPI parameter of π ν . (2) When ν ∈ (0 1) , , if we set the step-size η = Θ ( d -1 2 ( d + ν ) -4 ν ) , the expected number of iterations required by Algorithm 2, to output a sample which is ε -close in χ 2 -divergence to the target is of order ˜ O ( max { c 1 ν d 1 2 ν + 4 ν 2 , cd 1 2 + 4 ν ε -1 ν +1 }) , where c is the positive constant given in (16) . Here, ˜ O hides the polylog factors on d and 1 /ε . The stable proximal sampler (Algorithm 2) is a high accuracy sampler for the class of generalized Cauchy targets, as long as α ≤ ν , meaning that it achieves log( 1 /ε ) iteration complexity. The improvement from poly( 1 /ε ) to log( 1 /ε ) separates the stable proximal sampler and the Gaussian proximal sampler in the task of heavy-tailed sampling. When we use the rejection-sampling implementation with parameter α = 1 (Algorithm 3), iteration complexity goes through a phase transition as the tails get heavier. When the generalized Cauchy density has a finite mean ( ν &gt; 1 ), we achieve a high-accuracy sampler with log( 1 /ε ) iteration complexity. However, without a finite mean (i.e., ν ∈ (0 1) , ), the algorithm becomes a low-accuracy sampler with poly( 1 /ε ) complexity. Even in this low-accuracy regime, the implementable stable proximal sampler outperforms the Gaussian one, as originally highlighted in Table 1. Last, we claim that the poly( 1 /ε ) complexity of Algorithms 2 and 3 is not due to a loose analysis, as we show poly(1 /ε ) lower bounds in the following section. ## 3.4 Lower bounds for the stable proximal sampler Wenow study lower bounds on the stable proximal sampler to sample from the class of target densities satisfying Assumption 2, which includes the generalized Cauchy target. Recall that Assumption 2 implies the FPI used in Theorem 3. The result below, proved in Appendix C, complements Theorem 3, showing the impossibility of achieving log(1 /ε ) rates for a sufficiently large α . Theorem 4. Suppose π X ∝ exp( -V ) with V satisfying Assumption 2 and ν 2 ( d + ν 2 ) d + ν 1 &lt; α ≤ 2 . Let x k denote the k th iterate of Algorithm 2 with parameter α and step size η , and let ρ X k := Law( x k ) . Then for any τ ∈ ( ν 2 ( d + ν 2 ) d + ν 1 , α ) , and g d, ν ( 1 , ν 2 , τ ) = ν / 2 { τ ( d + ν 1 ) -ν 2 ( d + ν 2 ) } , we have <!-- formula-not-decoded --> where C ν ,ν 1 2 ,α is a constant depending only on ν , ν 1 2 , α , and m ( α ) τ is the τ th absolute moment of the α -stable random variable with density p ( α ) 1 defined in (2) . Remark 4. The parameter τ in Theorem 4 can be chosen arbitrarily close to α . Specifically, if we assume | X 0 | ≤ O ( √ d ) , then with the choice of τ = α -( log(log d ) log d ∧ log log( η -1 ) log( η -1 ) ) , we have <!-- formula-not-decoded --> where ˜ Ω hides polylog ( d/η ) factors. The τ th absolute moment of the α -stable random variable depends on the choice of α and the dimension d . It is hard to find an explicit formula of m ( α ) τ in general. An explicit formula is only available in some special cases, such as α = 1 2 , . Specializing Theorem 4 for the generalized Cauchy potential (i.e., ν 1 = ν 2 ) we obtain the following explicit result. Corollary 6. Let α ∈ (0 2] , . Suppose π ν ∝ exp( -V ν ) where V ν ( x ) is as in (1) for some ν ∈ (0 , α ) . Let ( x k ) k ≥ 0 be the output of Algorithm 2 with parameter α and step-size η &gt; 0 , and ρ X k := Law( x k ) for all k ≥ 0 . Then for any τ ∈ ( ν, α ) , <!-- formula-not-decoded --> where m ( α ) τ is the τ th absolute moment of the α -stable random variable with density p ( α ) 1 as in (2) . For the rejection sampling implementation in Algorithm 3, α = 1 and m (1) τ = Θ( d τ 2 ) for all τ &lt; 1 (see Appendix B.1). Notice that to implement the R α SO in the Stable proximal sampler efficiently, we need a sufficiently small step-size η . When the target potential satisfies Assumption 3, i.e. V is β -Hölder continuous with parameter L , we require η = Θ( d -1 2 L -1 β ) to ensure R α SO can be implemented with O (1) queries. Therefore, if we choose η = Θ( d -1 2 L -1 β ) , the minimum number of iterations we need to get an ε -error in TV is <!-- formula-not-decoded --> For the generalized Cauchy potential with ν ∈ (0 1) , , we have β = ν/ 4 and L = ( d + ν /ν ) , which leads to the following corollary. Corollary 7. Suppose π X ν ∝ exp( -V ν ) is the generalized Cauchy density with ν ∈ (0 1) , . Let x k denote the k -th iterate of the stable proximal sampler with α = 1 (Algorithm 3), and ρ X k := Law( x k ) . If we choose the step size η = Θ( L -4 ν d -1 2 ) where L = d + ν ν is the ν/ 4 -Hölder constant of V ν , and assume, for simplicity, | x 0 | ≤ O ( √ d ) , then, TV( π X ν , ρ X N ) ≤ ε requires N ≥ Ω ν,τ ( d τ +8 τ/ν 2+ τ ε -2( τ -ν ) ν (2+ τ ) ) , for any τ ∈ ( ν, 1) . Further, by choosing τ = max( ν, 1 -log(log( d/ε )) log( d/ε ) ) , we obtain <!-- formula-not-decoded --> The above result shows that when implementing the R α SO in Algorithm 2 with Algorithm 3, to sample from generalized Cauchy targets with ν ∈ (0 1) , , we can at best have an iteration complexity of order poly (1 /ε ) , matching the upper bounds in Corollary 5 up to certain factors. ## 4 Overview of Proof Techniques Lower bounds. We build on the techniques developed in [Hai10]. Let µ t denotes the law of LD along its trajectory. To proceed, we need some G : R d → R for which we can upper bound µ t ( G ) := ∫ G µ d t , and some f : R d → R that satisfies π X ( G ≥ y ) ≥ f ( y ) for all y ∈ R + . After finding the candidates G and f , Lemma 1 in Appendix A guarantees TV( π X , µ t ) ≥ sup y ∈ R + f ( y ) -µ t ( G /y. ) This technique relies on choosing G such that it has heavy tails under π X leading to a large f ( y ) , while having light tails along the trajectory, thus small µ t ( G ) . By picking G = exp( κV ) with κ ≥ 1 , one can immediately observe that π X ( G ) = ∞ , thus G indeed has heavy tails under π X . To control µ t ( G ) along the trajectory, one can use the generator of LD to bound ∂ µ t t ( G ) . Recall the generator of LD, L LD ( ) · = ∆( ) · - ⟨∇ V, ∇·⟩ . Therefore, with a choice of G = exp( κV ) , controlling ∂ µ t t ( G ) requires bounding the first and second derivatives of V . To avoid making extra assumptions for V in the analysis of LD, we instead construct G based on a surrogate potential ˜ ( V x ) = d + ν 2 2 ln(1 + | x | 2 ) , which is an upper bound to the potential V . We then estimate f based on this surrogate potential in Lemma 2, and control the growth of µ t ( G ) in Lemma 3. Combined with Lemma 1, this leads to the proof of Theorem 1, with the details provided in Appendix A. For the Gaussian proximal sampler, bounding ρ X k ( G ) requires controlling the expectation of G along the forward and backward heat flow. For the particular choice of G = exp( κV ) , we show in Lemma 4 that the growth of ρ X k ( G ) can be controlled only by considering a forward heat flow with the corresponding generator L HF = 1 2 ∆ . Therefore, given additional estimates on the second derivatives of V , we bound the growth of ρ X k ( G ) in Lemma 5. Once this bound is achieved, we can invoke Lemma 1 to finish the proof of Theorem 2. Upper bounds. Our upper bound analysis builds on that by [CCSW22] in the specific ways discussed next. We consider the change in χ 2 divergence when we apply the two operations to the law ρ X k to the iterates and the target π X : ( ) i evolving the two densities along the α -fractional heat flow for time η and ( ii ) applying the R α SO to the resulting densities. For the step (i), it is required to show that the solution along the fractional heat flow of the stable proximal sampler at any time, satisfies FPI. To show this, ( a ) the convolution property of the FPI is proved in Lemma 6, and ( b ) the FPI parameter for the stable process follows from [Cha04, Theorem 23]. In Proposition 3, it is then shown that the χ 2 divergence decays exponentially fast along the fractional heat flow under the assumption of FPI. The aforementioned results enable us to prove the exponential decay of χ 2 divergence along the fractional heat flow under FPI in Proposition 3. To deal with the step (ii) above, we use the data processing inequality; see Proposition 3. These two steps together, enable us to derive the stated upper bounds for the stable proximal sampler. ## 5 Discussion We showed the limitations of Gaussian proximal samplers for high-accuracy heavy-tailed sampling, and proposed and analyzed stable proximal samplers, establishing that they are indeed high-accuracy algorithms. We now list a few important limitations and problems for future research: (i) It is important to develop efficiently implementable versions of the stable proximal sampler for all values of α ∈ (0 2) , , and characterize their complexity in terms of problem parameters, (ii) Gaussian proximal samplers can be interpreted as a proximal point method for approximating the entropic regularized Wasserstein gradient flow of the KL objective [CCSW22]. This leads to the question, can we provide a variational intepreration of the stable proximal sampler? A potential approach is to leverage the results by [Erb14] on gradient flow interpretation of jump processes corresponding to the fractional heat equation, (iii) It is possible to use a non-standard Itô process in the proximal sampler (in place of the α -stable diffusion); see, for example, [EMS18, LWME19, HFBE24]. With this modification, it is interesting to examine the rates under weighted Poincaré inequalities that also characterize heavy-tailed densities. There are two difficulties to overcome here: ( a ) How to generate an exact non-standard Itô process? ( b ) How to implement the corresponding Restricted non-standard Gaussian Oracle, which requires the zeroth order information of the transition density of the Itô process? In certain cases, non-standard Itô diffusion can be interpreted as a Brownian motion on an embedded sub-manifold; thus, the approach in [GLL + 23] might be useful. ## Acknowledgements KB was supported in part by NSF grants DMS-2053918 and DMS-2413426. ## References | [ALPW22] | Christophe Andrieu, Anthony Lee, Sam Power, and Andi Q Wang, Comparison of Markov chains via weak Poincaré inequalities with application to pseudo-marginal MCMC , The Annals of Statistics 50 (2022), no. 6, 3592-3618. | |------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [ALPW23] | , Weak Poincaré Inequalities for Markov chains: Theory and Applications , arXiv preprint arXiv:2312.11689 (2023). | | [App09] | David Applebaum, Lévy processes and stochastic calculus , Cambridge university press, 2009. | | [BCE + 22] | Krishnakumar Balasubramanian, Sinho Chewi, Murat A Erdogdu, Adil Salim, and Shunshi Zhang, Towards a theory of non-log-concave sampling: First-order station- arity guarantees for Langevin Monte Carlo , Conference on Learning Theory, PMLR, 2022, pp. 2896-2923. | | [BHJ08] | Krzysztof Bogdan, Wolfhard Hansen, and Tomasz Jakubowski, Time-dependent Schrödinger perturbations of transition densities , Studia Mathematica 189 (2008), no. 3, 235-254. | | [BRZ19] | Joris Bierkens, Gareth O Roberts, and Pierre-André Zitt, Ergodicity of the zigzag process , The Annals of Applied Probability 29 (2019), no. 4, 2266-2301. | | [BZ17] | Maria-Florina F Balcan and Hongyang Zhang, Sample and computationally efficient learning algorithms under s -concave distributions , Advances in Neural Information Processing Systems 30 (2017). | | [CBL22] | Niladri S Chatterji, Peter L Bartlett, and Philip M Long, Oracle lower bounds for stochastic gradient sampling algorithms , Bernoulli 28 (2022), no. 2, 1074-1092. | | [CCBJ18] | Xiang Cheng, Niladri S Chatterji, Peter L Bartlett, and Michael I Jordan, Underdamped Langevin MCMC: A non-asymptotic analysis , Conference on learning theory, PMLR, 2018, pp. 300-323. | | [CCSW22] | Yongxin Chen, Sinho Chewi, Adil Salim, and Andre Wibisono, Improved analysis for a proximal algorithm for sampling , Conference on Learning Theory, PMLR, 2022, pp. 2984-3014. | | [CDV09] | Karthekeyan Chandrasekaran, Amit Deshpande, and Santosh Vempala, Sampling s-concave functions: The limit of convexity based isoperimetry , International Work- shop on Approximation Algorithms for Combinatorial Optimization, Springer, 2009, pp. 420-433. | | [CG03] | Eric A Carlen and Wilfrid Gangbo, Constrained steepest descent in the 2-Wasserstein metric , Annals of mathematics (2003), 807-846. | | [CG23] | Yuansi Chen and Khashayar Gatmiry, A Simple Proof of the Mixing of Metropolis- Adjusted Langevin Algorithm under Smoothness and Isoperimetry , arXiv preprint arXiv:2304.04095 (2023). | |------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [CGL + 22] | Sinho Chewi, Patrik R Gerber, Chen Lu, Thibaut Le Gouic, and Philippe Rigollet, The query complexity of sampling from strongly log-concave distributions in one dimension , Proceedings of Thirty Fifth Conference on Learning Theory, vol. 178, PMLR, 2022, pp. 2041-2059. | | [CGLL22] | Sinho Chewi, Patrik Gerber, Holden Lee, and Chen Lu, Fisher information lower bounds for sampling , arXiv preprint arXiv:2210.02482 (2022). | | [Cha04] | Djalil Chafaï, Entropies, convexity, and functional inequalities, on Φ -entropies and Φ -sobolev inequalities , Journal of Mathematics of Kyoto University 44 (2004), no. 2, 325-363. | | [CLA + 21] | Sinho Chewi, Chen Lu, Kwangjun Ahn, Xiang Cheng, Thibaut Le Gouic, and Philippe Rigollet, Optimal dimension dependence of the Metropolis-Adjusted Langevin Algo- rithm , Conference on Learning Theory, PMLR, 2021, pp. 1260-1300. | | [CLW23] | Yu Cao, Jianfeng Lu, and Lihan Wang, On explicit L 2 -convergence rate estimate for underdamped Langevin dynamics , Archive for Rational Mechanics and Analysis 247 (2023), no. 5, 90. | | [DBCD19] | George Deligiannidis, Alexandre Bouchard-Côté, and Arnaud Doucet, Exponential ergodicity of the Bouncy Particle Sampler , Annals of Statistics 47 (2019), no. 3. | | [DCWY19] | Raaz Dwivedi, Yuansi Chen, Martin J Wainwright, and Bin Yu, Log-concave sampling: Metropolis-Hastings algorithms are fast , Journal of Machine Learning Research 20 (2019), no. 183, 1-42. | | [DGM20] | Alain Durmus, Arnaud Guillin, and Pierre Monmarché, Geometric ergodicity of the Bouncy Particle Sampler , Annals of applied probability 30 (2020), no. 5, 2069-2098. | | [DKTZ20] | Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, and Nikos Zarifis, Learning halfspaces with Massart noise under structured distributions , Conference on Learning Theory, PMLR, 2020, pp. 1486-1513. | | [DM17] | Alain Durmus and Éric Moulines, Nonasymptotic convergence analysis for the un- adjusted Langevin algorithm , The Annals of Applied Probability 27 (2017), no. 3, 1551-1587 (en). | | [DRD20] | Arnak S Dalalyan and Lionel Riou-Durand, On sampling from a log-concave density using kinetic Langevin diffusions , Bernoulli 26 (2020), no. 3, 1956-1988. | | [EGZ19] | Andreas Eberle, Arnaud Guillin, and Raphael Zimmer, Couplings and quantitative contraction rates for Langevin dynamics , The Annals of Probability 47 (2019), no. 4, 1982-2010. | | [EH21] | Murat A Erdogdu and Rasa Hosseinzadeh, On the convergence of Langevin Monte Carlo: The interplay between tail growth and smoothness , Conference on Learning Theory, PMLR, 2021, pp. 1776-1822. | | [EHZ22] | Murat A Erdogdu, Rasa Hosseinzadeh, and Shunshi Zhang, Convergence of Langevin Monte Carlo in chi-squared and Rényi divergence , International Conference on Artifi- cial Intelligence and Statistics, PMLR, 2022, pp. 8151-8175. | | [EMS18] | Murat A Erdogdu, Lester Mackey, and Ohad Shamir, Global non-convex optimization with discretized diffusions , Advances in Neural Information Processing Systems 31 (2018). | | [Erb14] | Matthias Erbar, Gradient flows of the entropy for jump processes , Annales de l'IHP Probabilités et statistiques, vol. 50, 2014, pp. 920-945. | | [FYC23] | Jiaojiao Fan, Bo Yuan, and Yongxin Chen, Improved dimension dependence of a proximal algorithm for sampling , arXiv preprint arXiv:2302.10081 (2023). | |------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [GB09] | Alan Genz and Frank Bretz, Computation of multivariate normal and t-probabilities , vol. 195, Springer Science &Business Media, 2009. | | [GBH04] | Alan Genz, Frank Bretz, and Yosef Hochberg, Approximations to multivariate t inte- grals with application to multiple comparison procedures , Recent Developments in Multiple Comparison Procedures, Institute of Mathematical Statistics, 2004, pp. 24-32. | | [GJPS08] | Andrew Gelman, Aleks Jakulin, Maria Grazia Pittau, and Yu-Sung Su, A weakly informative default prior distribution for logistic and other regression models , The annals of applied statistics 2 (2008), no. 4, 1360-1383. | | [GLL20] | Rong Ge, Holden Lee, and Jianfeng Lu, Estimating normalizing constants for log- concave distributions: Algorithms and lower bounds , Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, 2020, pp. 579-586. | | [GLL + 23] | Sivakanth Gopi, Yin Tat Lee, Daogao Liu, Ruoqi Shen, and Kevin Tian, Algorithmic aspects of the log-Laplace transform and a non-Euclidean proximal sampler , arXiv preprint arXiv:2302.06085 (2023). | | [GLM18] | Joyee Ghosh, Yingbo Li, and Robin Mitra, On the use of Cauchy prior distributions for Bayesian logistic regression , Bayesian Analysis 13 (2018), no. 2, 359-383. | | [Hai10] | Martin Hairer, Convergence of Markov processes , Lecture notes (2010). | | [HBE20] | Ye He, Krishnakumar Balasubramanian, and Murat A Erdogdu, On the ergodicity, bias and asymptotic normality of randomized midpoint sampling method , Advances in Neural Information Processing Systems 33 (2020), 7366-7376. | | [HBE24] | , An analysis of Transformed Unadjusted Langevin Algorithm for Heavy-tailed Sampling , IEEE Transactions on Information Theory (2024). | | [HFBE24] | Ye He, Tyler Farghly, Krishnakumar Balasubramanian, and Murat A Erdogdu, Mean- square analysis of discretized Itô diffusions for heavy-tailed sampling , Journal of Machine Learning Research (to appear) (2024). | | [HMW21] | Lu-Jing Huang, Mateusz B Majka, and Jian Wang, Approximation of heavy-tailed distributions via stable-driven SDEs , Bernoulli 27 (2021), no. 3, 2040-2068. | | [JG12] | Leif T Johnson and Charles J Geyer, Variable transformation to obtain geometric ergodicity in the Random-Walk Metropolis algorithm , The Annals of Statistics 40 (2012), no. 6, 3050-3076. | | [JR07] | Søren Jarner and Gareth Roberts, Convergence of heavy-tailed Monte Carlo Markov Chain algorithms , Scandinavian Journal of Statistics 34 (2007), no. 4, 781-815. | | [Kam18] | Kengo Kamatani, Efficient strategy for the Markov chain Monte Carlo in high- dimension with heavy-tailed target probability distribution , Bernoulli 24 (2018), no. 4B, 3711-3750. | | [KN04] | Samuel Kotz and Saralees Nadarajah, Multivariate t-distributions and their applica- tions , Cambridge University Press, 2004. | | [Kwa17] | Mateusz Kwa´nicki, s Ten equivalent definitions of the fractional Laplace operator , Fractional Calculus and Applied Analysis 20 (2017), no. 1, 7-51. | | [LST20] | Yin Tat Lee, Ruoqi Shen, and Kevin Tian, Logsmooth gradient concentration and tighter runtimes for Metropolized Hamiltonian Monte Carlo , Conference on learning theory, PMLR, 2020, pp. 2565-2597. | | [LST21a] | , Lower bounds on Metropolized sampling methods for well-conditioned distri- butions , Advances in Neural Information Processing Systems 34 (2021), 18812-18824. | | [LST21b] | , Structured logconcave sampling with a Restricted Gaussian Oracle , Confer- ence on Learning Theory, PMLR, 2021, pp. 2993-3050. | |-------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [LWME19] | Xuechen Li, Yi Wu, Lester Mackey, and Murat A Erdogdu, Stochastic Runge-Kutta accelerates Langevin Monte Carlo and beyond , Advances in neural information pro- cessing systems 32 (2019). | | [LZT22] | Ruilin Li, Hongyuan Zha, and Molei Tao, Sqrt(d) Dimension Dependence of Langevin Monte Carlo , The International Conference on Learning Representations, 2022. | | [MFWB22] | Wenlong Mou, Nicolas Flammarion, Martin J Wainwright, and Peter L Bartlett, Im- proved bounds for discretization of Langevin diffusions: Near-optimal rates without convexity , Bernoulli 28 (2022), no. 3, 1577-1601. | | [MHFH + 23] | Alireza Mousavi-Hosseini, Tyler K. Farghly, Ye He, Krishna Balasubramanian, and Murat A. Erdogdu, Towards a Complete Analysis of Langevin Monte Carlo: Beyond Poincaré Inequality , Proceedings of Thirty Sixth Conference on Learning Theory, vol. 195, 2023, pp. 1-35. | | [Nol20] | John P Nolan, Univariate stable distributions , Springer, 2020. | | [N¸R19] S | Than Huy Nguyen, Umut ¸im¸ekli, S s and Gaël Richard, Non-asymptotic analysis of Frac- tional Langevin Monte Carlo for non-convex optimization , International Conference on Machine Learning, 2019, pp. 4810-4819. | | [PBEM23] | Mathieu Le Provost, Ricardo Baptista, Jeff D Eldredge, and Youssef Marzouk, An adaptive ensemble filter for heavy-tailed distributions: Tuning-free inflation and local- ization , arXiv preprint arXiv:2310.08741 (2023). | | [QM16] | Di Qi and Andrew J Majda, Predicting fat-tailed intermittent probability distributions in passive scalar turbulence with imperfect models through empirical information theory , Communications in Mathematical Sciences 14 (2016), no. 6, 1687-1722. | | [RRT17] | Maxim Raginsky, Alexander Rakhlin, and Matus Telgarsky, Non-convex learning via stochastic gradient Langevin dynamics: A nonasymptotic analysis , Conference on Learning Theory, PMLR, 2017, pp. 1674-1703. | | [SL19] | Ruoqi Shen and Yin Tat Lee, The randomized midpoint method for log-concave sampling , Advances in Neural Information Processing Systems 32 (2019). | | [SP15] | Prashant D Sardeshmukh and Cécile Penland, Understanding the distinctively skewed and heavy tailed character of atmospheric and oceanic probability distributions , Chaos: An Interdisciplinary Journal of Nonlinear Science 25 (2015), no. 3. | | [SZTG20] ¸ | Umut ¸ Sim¸ekli, s Lingjiong Zhu, Yee Whye Teh, and Mert Gurbuzbalaban, Fractional underdamped Langevin dynamics: Retargeting SGD with momentum under heavy- tailed gradient noise , International Conference on Machine Learning, 2020, pp. 8970- 8980. | | [Wib18] | Andre Wibisono, Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem , Conference on Learning Theory, PMLR, 2018, pp. 2093-3027. | | [WSC22a] | Keru Wu, Scott Schmidler, and Yuansi Chen, Minimax mixing time of the Metropolis- adjusted Langevin algorithm for log-concave sampling , The Journal of Machine Learning Research 23 (2022), no. 1, 12348-12410. | | [WSC22b] | , Minimax Mixing Time of the Metropolis-Adjusted Langevin Algorithm for Log-Concave Sampling , Journal of Machine Learning Research 23 (2022), no. 270, 1-63. | | [WW15] | Feng-Yu Wang and Jian Wang, Functional inequalities for stable-like Dirichlet forms , Journal of Theoretical Probability 28 (2015), no. 2, 423-448. | | [YŁR22] | Jun Yang, Krzysztof Łatuszy´ski, n and Gareth Roberts, Stereographic Markov Chain Monte Carlo , arXiv preprint arXiv:2205.12112 (2022). | |-----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [ZZ23] | Xiaolong Zhang and Xicheng Zhang, Ergodicity of supercritical SDEs driven by α -stable processes and heavy-tailed sampling , Bernoulli 29 (2023), no. 3, 1933-1958. | ## A Lower Bound Proofs for the Langevin Diffusion and the Gaussian Proximal Sampler While research on upper bounds of sampling algorithms' complexity has advanced considerably, the exploration of lower bounds is still nascent. [CGL + 22] explored the query complexity of sampling from strongly log-concave distributions in one-dimensional settings. [LZT22] established lower bounds for LMC in sampling from strongly log-concave distributions. [CBL22] presented lower bounds for sampling from strongly log-concave distributions with noisy gradients. [GLL20] focused on lower bounds for estimating normalizing constants of log-concave densities. Contributions by [LST21a] and [WSC22b] provide lower bounds in the metropolized algorithm category, including Langevin and Hamiltonian Monte Carlo, in strongly log-concave contexts. Finally, [CGLL22] contributed to lower bounds in Fisher information for non-log-concave sampling. In what follows, we take a different approach and rely on the arguments developed in [Hai10]. We begin by stating the following result which drives our lower bound strategy. Lemma 1 ([Hai10, Theorem 5.1]) . Suppose µ and ν are probability measures on R d . Consider some G : R d → R + and f : R + → R + satisfying µ G ( ≥ y ) ≥ f ( y ) for all y ∈ R + . Then, <!-- formula-not-decoded --> ↦ In particular, suppose Id · f : R + ∋ y → yf y ( ) ∈ R + is a bijection, then <!-- formula-not-decoded --> for any m ≥ ∫ G ν d . Proof. By the definition of total variation and Markov's inequality, for any y &gt; 0 <!-- formula-not-decoded --> When Id · f is invertible, choosing y = (Id · f ) -1 (2 m ) implies yf y ( ) = 2 m and yields the desired result. To apply Lemma 1 when the target density satisfies Assumption 1, we need to establish tail lower bounds for this density, which we do so via the following lemma. In the following, let ω d := π d/ 2 Γ(( d +2) 2) / denote the volume of the unit d -ball. Lemma 2. Suppose π X ( x ) ∝ exp( -V ( x )) satisfies Assumption 1. Then, for all R &gt; 0 , <!-- formula-not-decoded --> When focusing on dependence on R and d , we obtain, <!-- formula-not-decoded --> where C ν 1 = 2 1 -ν 1 / 2 e -ν 1 (1+ ν 1 )Γ( ν / 1 2) . Proof. Without loss of generality assume V (0) = 0 . Via Assumption 1, we have the estimates for V , <!-- formula-not-decoded --> and similarly <!-- formula-not-decoded --> Consequently, using the spherical coordinates, <!-- formula-not-decoded --> Next, using the lower bound established on V and spherical coordinates, we obtain, <!-- formula-not-decoded --> where B denotes the beta function. Plugging back into our tail lower bound, we obtain, <!-- formula-not-decoded --> Moreover, by [MHFH + 23, Lemma 32] we have <!-- formula-not-decoded --> which completes the proof. Another element of Lemma 1 is controlling the growth of E [ G X ( t )] throughout the process. The following lemma achieves such control under the Langevin diffusion. Lemma 3. Suppose ( X t ) t ≥ 0 is the solution to the Langevin diffusion starting at X 0 with the corresponding potential V ( x ) satisfying Assumption 1. Let G x ( ) = exp( κV ˜ ( x )) where ˜ ( V x ) = d + ν 2 2 ln(1 + | x | 2 ) and κ ≥ 2 d + ν 2 ∨ 1 . Then, <!-- formula-not-decoded --> Proof. Recall the generator of the Langevin diffusion L · ( ) = ∆ · -⟨∇ V , ∇·⟩ . Then, <!-- formula-not-decoded --> Integrating the above inequality completes the proof. With the above lemmas in hand, we are ready to present the proof of Theorem 1. Proof of Theorem 1. To apply Lemma 1 we choose G x ( ) = exp( κV ˜ ( x )) where ˜ ( V x ) = d + ν 2 2 ln(1+ | x | 2 ) with κ ≥ 1 ∨ 2 d + ν 2 . By Lemma 2 we have <!-- formula-not-decoded --> Moreover, define <!-- formula-not-decoded --> with g (0) := E [ G X ( 0 )] . Then by Lemma 3 we have E [ G X ( t )] ≤ g t ( ) and we can invoke Lemma 1 to obtain <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> where we used the fact that 1 + x ≤ e x for all x ∈ R and g t ( ) is non-decreasing in t . Choose <!-- formula-not-decoded --> for a sufficiently large constant C ′ ν ,ν 1 2 ≥ 1 . For simplicity, let <!-- formula-not-decoded --> and notice that <!-- formula-not-decoded --> Using the fact that <!-- formula-not-decoded --> we have <!-- formula-not-decoded --> <!-- formula-not-decoded --> where ˜ C ν ,ν 1 2 = C ν 1 e -1+ 2 ν /d 8 . By plugging in the value of y ∗ from (3), we obtain, <!-- formula-not-decoded --> <!-- formula-not-decoded --> Thus for sufficiently large C ′ ν ,ν 1 2 , there exists C ′′ ν ,ν 1 2 such that <!-- formula-not-decoded --> Choosing κ according to the statement of the theorem completes the proof. In order to prove a similar theorem for the Gaussian proximal sampler, we control the growth of E [ G x ( k )] for the iterates of the proximal sampler via the following lemmas. <!-- formula-not-decoded --> Lemma 4. Suppose ( x , y k k ) k are the iterates of the Gaussian proximal sampler with step size η and target density π X ∝ exp( -V ) for some V : R d → R . Let G x ( ) = exp( κV ( x )) with κ ≥ 1 . Then, for every k ≥ 0 , <!-- formula-not-decoded --> where z ∼ N (0 , I d ) is sampled independently from x k . <!-- formula-not-decoded --> <!-- formula-not-decoded --> where z 1 ∼ N (0 , I d ) . Furthermore, <!-- formula-not-decoded --> Therefore, <!-- formula-not-decoded --> Recall y k = x k + √ ηz 2 where z 2 ∼ N (0 , I d ) is independent from x k . By the towering property of conditional expectation, <!-- formula-not-decoded --> where z ∼ N (0 , I ) is independent from x d k , which completes the proof. In order to provide a more refined control over E [ G x ( k )] , we need additional assumptions on V . In particular, when considering the generalized Cauchy density, we arrive at the following lemma. Lemma 5. Suppose ( x , y k k ) k are the iterates of the Gaussian proximal sampler with step size η and target density π X ∝ exp( -V ) satisfies <!-- formula-not-decoded --> for all x ∈ R d . Let G x ( ) = exp( κV ( x )) with κ ≥ 1 ∨ 2 d + ν 2 . Then, for every k ≥ 0 , <!-- formula-not-decoded --> Proof. From Lemma 4, we have <!-- formula-not-decoded --> where z ∼ N (0 , I d ) is independent from x k . Consider the Brownian motion starting at x k , denoted by Z t = B t + x k where ( B t ) is a standard Brownian motion in R d . Notice that the generator for the process d Z t = d B t is L = 1 2 ∆ . Therefore, <!-- formula-not-decoded --> Integrating the above inequality yields [ 2 2 E G Z ( t )] κ d ( + 2 ) ν ≤ E [ G Z ( 0 )] κ d ( + 2 ) ν +2 ( κ d + ν 2 ) t. The proof is complete by noticing that Z 0 = x k and Z t = x k + √ 2 ηz for t = 2 η . Proof of Theorem 2. Notice that the statements of Lemmas 3 and 5 are virtually the same by changing t to 2 kη . Using this fact, the rest of the proof follows exactly the same as the proof of Theorem 1. ## B Proofs for the Stable Proximal Sampler ## B.1 Preliminaries In this section, we introduce additional preliminaries on the isotropic α -stable process, the fractional Poincaré-type inequalities, the fractional Laplacian and the fractional heat flow. The Lévy process is a stochastic process that is stochastically continuous with independent and stationary increments. Due to the stochastic continuity, the Lévy processes have càdlàg trajectories, which allows jumps in the paths. A Lévy process Y t is uniquely determined by a triple ( b, A, ν ) through the following Lévy-Khinchine formula: for all t ≥ 0 and ξ ∈ R d , <!-- formula-not-decoded --> where b ∈ R d is a drift vector. A ∈ R d × d is the covariance matrix of the Brownian motion in the Lévy-Itô decomposition[App09, Thereom 2.4.16] and ν is the Lévy measure related to the jump parts in the Lévy-Itô decomposition. The rotationally invariant(isotropic) stable process is a special case for the Le´y process when v b = 0 , A = 0 and ν is the measure given by <!-- formula-not-decoded --> Based on the Lévy-Khinchine formula (4), if we initialize the process at x ∈ R d , its characteristic function is given by <!-- formula-not-decoded --> The index of stability α ∈ (0 2] , determines the tail-heaviness of the densities: the smaller is α , the heavier is the tail. The parameter t in (6) measures the spread of X t around the center. When α = 2 , the stable process pertains to the Brownian motion running with a time clock twice as fast as the standard one and hence it has continuous paths. When α ∈ (0 2) , , the stable process paths contain discontinuities, which are often referred as jumps. At each fixed time, unlike the Brownian motion, the α -stable process density only has a finite p th -moment for p &lt; α , i.e. <!-- formula-not-decoded --> When d = 1 , the fractional absolute moment formula for m ( α ) p can be derived explicitly, see [Nol20, Chapter 3.7]. When d &gt; 1 , the explicit formula for m ( α ) p is only known in some special cases. For example, when α = 1 , m (1) p = Γ(( d + ) 2)Γ((1 p / -p / ) 2) Γ( d/ 2)Γ(1 / 2) for all p &lt; 1 . Another good property of α -stable process is the self-similarity. By examining the characteristic functions, it is easy to verify that the isotropic α -stable process is self-similar with the Hurst index 1 /α , i.e. X ( α ) at and a 1 /α X ( α ) t have the same distribution. Or equivalently, p ( α ) t ( x ) = t -d α p ( α ) 1 ( t -1 α x ) for all x ∈ R d and t &gt; 0 . The fractional Laplacian operator in R d of order α is denoted by - -( ∆) α/ 2 for α ∈ (0 , 2] . It was introduced as a non-local generalization of the Laplacian operator to model various physical phenomenons. In [Kwa17], ten equivalent definitions of the fractional Laplacian operator are introduced. Here we recall two of them: - (a) Distributional definition: For all Schwartz functions ϕ defined on R d , we have <!-- formula-not-decoded --> - (b) Singular integral definition: For a limit in the space L p ( R d ) , p ∈ [1 , ∞ ) , we have <!-- formula-not-decoded --> where B r is the unit ball with radius r centered at the origin. The fractional Laplacian can be understood as the infinitesimal generator of the stable Le´y process. v More explicitly, the semigroup defined by the transition probability p ( α ) t in (2) has the infinitesimal generator - -( ∆) α/ 2 , i.e. the density function p ( α ) t satisfies the following equation in the sense of distribution, [BHJ08]: <!-- formula-not-decoded --> (7) is usually referred as the α -fractional heat flow. When α = 2 , - -( ∆) α/ 2 is the Laplacian operator and (7) becomes the heat flow. Proposition 2 (From FPI to PI) . When ϑ → 2 -, the ϑ -FPI reduces to the classical Poincaré inequality with Dirichlet form E µ ( ϕ ) = ∫ |∇ ϕ x ( ) | 2 d x for any smooth bounded ϕ : R d → R d . Proof. It suffices to prove that E ( ϑ ) µ ( ϕ ) converges to E µ ( ϕ ) as ϑ → 2 -for any smooth function ϕ . Recall the definition of E ( ϑ ) µ ( ϕ ) : ̸ <!-- formula-not-decoded --> where c d,ϑ = O (2 -ϑ ) as ϑ → 2 -. Now we rewrite the inside integral in E ( ϑ ) µ ( ϕ ) and split the integral region into a centered unit ball, denoted as B 1 , and its complement: ̸ <!-- formula-not-decoded --> ̸ For I 2 , we have <!-- formula-not-decoded --> As a result, the term in E ( ϑ ) µ ( ϕ ) that is induced by I 2 satisfies <!-- formula-not-decoded --> For I 1 , we have when ϑ &gt; 1 , <!-- formula-not-decoded --> where ∥ ϕ ∥ C i ( R d ) := sup x ∈ R d | ϕ ( ) i ( x ) | for i = 1 2 , . As a result, the term in E ( ϑ ) µ ( ϕ ) that is induced by I 1 satisfies <!-- formula-not-decoded --> ̸ Therefore we have E ( ϑ ) µ ( ϕ ) → c d,ϑ ∫ R d ∫ B 1 |⟨∇ ϕ y ,z ( ) ⟩| 2 | z | d + ϑ µ y ( )d z d y as ϑ → 2 -. Last, we prove the limit is equivalent to 2 E µ ( ϕ ) . For i = j , we have <!-- formula-not-decoded --> ̸ where ˜ z k = z k for all k = j and ˜ z j = -z j . Therefore, ∫ B 1 ∂ ϕ y ∂ ϕ y z z i ( ) j ( ) i j d z = 0 . As a result, <!-- formula-not-decoded --> <!-- formula-not-decoded --> and the proof follows from c d,ϑ π d 2 (2 -ϑ )Γ( d 2 +1) → 2 as ϑ → 2 - . ## B.2 χ 2 convergence under FPI In this section, we study the decaying property of χ 2 -divergence from ρ X k to π X , where ρ X k is the law of x k . In the following analysis, we denote ρ k = ρ X,Y k as the law of ( x , y k k ) , ρ Y k the law of y k . We will analyze the two steps in the stable proximal sampler separately. Step 1. In the following proposition, we study the decay of χ 2 -divergence in step 1. Proposition 3. Assume that π X satisfies the α -FPI with parameter C FPI ( α ) , then for each k ≥ 0 , <!-- formula-not-decoded --> Proof of Proposition 3. For the simplicity of notations, we will write p ( α ) and p ( α ) t as p and p t respectively in this proof. Since x k ∼ ρ X k and y k | x k ∼ p η ( ; x, · ) , we have <!-- formula-not-decoded --> Therefore, we can view ρ Y k as ρ X k evolving along the following factional heat flow <!-- formula-not-decoded --> That is if ˜ ρ 0 = ρ X k , then ˜ ρ η = ρ Y k . Similarly, since π Y = π X ∗ p η , if ˜ ρ 0 = π X , then ˜ ρ η = π Y . For any t ∈ [0 , η ] , define π X t = π X ∗ p t and ρ X t = ρ X k ∗ p t . The derivative of ϕ -divergence from ρ X t to π X t can be calculated as <!-- formula-not-decoded --> where in the second identity we used the distributional definition of the fractional Laplacian. Next according to the singular integral definition of fractional Laplacian, we have <!-- formula-not-decoded --> where B r = { x ∈ R d : | x | ≤ } r and c d,α is given in (5). With (8), we have <!-- formula-not-decoded --> <!-- formula-not-decoded --> When ϕ r ( ) = ( r -1) 2 , ∫ R d ϕ ( ρ X t π X t ) π X t d x = χ 2 ( ρ X t | π X t ) and we have <!-- formula-not-decoded --> According to [Cha04, Theorem 23], p t satisfies α -FPI with parameter t for all t ∈ (0 , η ] . Since π X also satisfies the α -FPI with parameter C FPI ( α ) , Lemma 6 implies that π X t = π X ∗ p t satisfies the α -FPI with parameter C FPI ( α ) + η for all t ∈ (0 , η ] . Therefore we have <!-- formula-not-decoded --> Last, according to Gronwall's inequality we have <!-- formula-not-decoded --> Step 2. In this step, we study the decay of χ 2 -divergence in step 2. building on the work by [CCSW22]. According to the R α SO, we have ρ X k +1 ( x ) = ∫ R d π X Y | ( x y ρ | ) Y k ( y )d y . Also notice that π X ( x ) = ∫ R d π X Y | ( x y π | ) Y ( y )d y . According to the data processing inequalities, χ 2 divergence won't increase after step 2, i.e. χ 2 ( ρ X k +1 | π X ) ≤ χ 2 ( ρ Y k | π Y ) . Combining our results in Step 1 and Step 2 , we prove Theorem 3. Lemma 6. Let µ , µ 1 2 be two probability densities satisfying the ϑ -FPI with parameters C , C 1 2 respectively. Then µ 1 ∗ µ 2 satisfies the ϑ -FPI with parameter C 1 + C 2 . Proof of Lemma 6. Let X,Y be two independent random variables such that X ∼ µ 1 and Y ∼ µ 2 . Then X + Y ∼ µ 1 ∗ µ 2 . According to variance decomposition, we have for any function ϕ , <!-- formula-not-decoded --> Since X ∼ µ 1 and µ 1 satisfies the ϑ -FPI with parameter C 1 , we have ̸ <!-- formula-not-decoded --> therefore we have ̸ <!-- formula-not-decoded --> Since Y ∼ µ 2 and µ 2 satisfies the ϑ -FPI with parameter C 2 , we have Var ( E [ ϕ X ( + Y ) | Y ]) <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> where the last inequality follows from Jensen's inequality. Combining (9) and (10), we have <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> where the second inequality follows from Fatou's lemma. ## B.3 Implementation of the Stable Proximal Sampler In this section we discuss the implementation of the R α SO step in our stable proximal sampler. We introduce an exact implementation of the R α SO step without optimizing the target potential and the proofs for Corollary 3 and Proposition 1. Rejection sampling without optimization . Suppose a uniform lower bound of the target potential is known, i.e. there is a constant C Low such that inf x ∈ R d V ( x ) ≥ C Low &gt; -∞ , R SO at each step α can be implemented exactly via a rejection sampler with proposals ˜ x k +1 following p ( α ) η ( · -y k ) and the acceptance probability exp( -V (˜ x k +1 ) + C Low ) . Then the expected number of rejections, N , satisfies <!-- formula-not-decoded --> Without loss of generality, we assume x ∗ = 0 , which always hold if we translate the potential V by V (0) . Then we have <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> where the second inequality follows from Assumption 3 and the last inequality follows from the proof of Corollary 3. With the above estimation, we can pick η = Θ( C 1 β Low d -1 2 L -1 β ) and the expected number of rejections satisfies log N = O ( C Low + LM ) with M = E π X [ | X | β ] + χ 2 ( ρ X 0 | π X ) E π X [ | X | 2 β ] 1 2 . Proof of Corollary 3. The expected number of iterations conditioned on y k in the rejection sampling is <!-- formula-not-decoded --> WLOG, assume x ∗ = 0 . Since V satisfies Assumption 3, we have <!-- formula-not-decoded --> Therefore, when η = Θ( d -1 2 L -1 β ) , the expected number of rejections N is of order E [exp( L y | k | β ] . Since π X satisfies a 1 -FPI with parameter C FPI (1) , according to [Cha04], p t satisfies the 1 -FPI with parameter η for any t ∈ (0 , η ) . Last it follows from Theorem 9 that for any η &gt; 0 , to achieve a ε -accuracy in χ 2 divergence, we need to perform the stable proximal sampler K steps with <!-- formula-not-decoded --> Proof of Proposition 1. For all k ≥ 0 , we have <!-- formula-not-decoded --> where the last two inequalities follow from the data processing inequality. Therefore, TV(˜ ρ X k , ρ X k ) ≤ kε TV +TV(˜ ρ X 0 , ρ X 0 ) for all k ≥ 1 . Next, the iteration complexity of Algorithm 2 with an inexact R α SO can be obtained from Proposition 1. Since ˜ ρ X 0 = ρ X 0 , according to Pinsker's inequality, we have <!-- formula-not-decoded --> For any ε &gt; 0 and any K satisfies <!-- formula-not-decoded --> if the R α SO can be implemented inexactly with ε TV ≤ ε 2 K , the density of the K th iterate of Algorithm 2 is ε -close to the target in the total variation distance, i.e. TV(˜ ρ K X , π X ) ≤ ε . ## B.4 Convergence under Weak Fractional Poincaré Inequality Our main result for Algorithm 2 in Theorem 3 is proved under the assumption the target satisfying α -FPI. Furthermore, for the rejection-sampling based implementation of the R α SO in Algorithm 3, the parameter α is set to be 1 . In order to use Theorem 3 for the case of generalized Cauchy targets, one has to check if the α -FPI is satisfied or not, which depends on the degrees of freedom parameter ν of the generalized Cauchy desity. Specifically, when ν ≥ 1 , 1 -FPI is satisfied and we hence have Corollary 5, part (i) based on Theorem 3. When ν ∈ (0 , 1) , 1 -FPI is not satisfied and hence Theorem 3 no longer applies. To tackle this issue, we now introduce a generalization of Theorem 3 to the case when the target satisfies a weak version of Fractioanl Poincaré inequality (wFPI) and provide convergence guarantees for the stable proximal sampler in χ 2 -divergence. Definition 3 (weak Fractional Poincaré Inequality) . For ϑ ∈ (0 , 2) , a probability density µ satisfies a ϑ -weak fractional Poincaré inequality if there exists a decreasing function β WFPI ( ϑ ) : R + → R + such that for any ϕ : R d → R in the domain of E ( ϑ ) µ with µ ϕ ( ) = 0 , we have <!-- formula-not-decoded --> where E ( ϑ ) µ is a non-local Dirichlet form associated with µ defined as ̸ <!-- formula-not-decoded --> The wFPI is satisfied by any probability density that is locally bounded, and is hence extremely general. Setting the parameter r = 0 , wFPI reduces to FPI with C FPI ( ϑ ) = β WFPI ( ϑ ) (0) . Theorem 5. Assume that π X satisfies the α -wFPI with parameter β WFPI ( α ) ( r ) for some α ∈ (0 2) , . Then for any step size η &gt; 0 and initial condition ρ X 0 such that R ∞ ( ρ X 0 | π X ) &lt; ∞ , the k th iterate of the stable proximal sampler with parameter α (Algorithm 2) satisfies <!-- formula-not-decoded --> The proof of Theorem 5 follows the same two-step analysis as it is introduced in the beginning of Section B.2. The convergence property corresponding to Step 1 is stated in the following Proposition. Proposition 4. Assume that π X satisfies the α -wFPI with parameter β WFPI ( α ) for some α ∈ (0 2) , , then for each k ≥ 0 , r &gt; 0 , <!-- formula-not-decoded --> Proof of Proposition 4. In the stable proximal sampler with parameter α , we have ρ Y k = ρ X k ∗ p ( α ) η and π Y = π X ∗ p ( α ) η . Therefore we can view ρ Y k and π Y as ρ X k and π X evolving along the fractional heat flow by time η respectively. For any t ∈ [0 , η ] , define π X t = π X ∗ p ( α ) t and ρ X t = ρ X k ∗ p ( α ) t . We have <!-- formula-not-decoded --> According to [Cha04, Theorem 23], p ( α ) t satisfies α -FPI with parameter η for all t ∈ (0 , η ] . According to Lemma 7, π X t satisfies the α -wFPI with β WFPI ( α ) ( r ) + η . Therefore we get <!-- formula-not-decoded --> <!-- formula-not-decoded --> where the last inequality follows from the definition of Renyi-divergence and the data processing inequality. Last, (11) follows from Gronwall's inequality. Proof of Theorem 5. According to Proposition 4, the χ 2 decaying property in step 1 of the algorithm is as follows, <!-- formula-not-decoded --> In step 2, we have ρ X k +1 = ρ Y k ∗ π X Y | and π X = π Y ∗ π X Y | . Therefore according to the data processing inequality, we get <!-- formula-not-decoded --> where the last inequality follows from the data processing inequality. Last, apply the above iterative relation k times and we prove (11). Lemma 7. Let µ 1 be a probability density on R d satisfying the ϑ -wFPI with parameter β WFPI ( ϑ ) ( r ) . Let µ 2 be a probability density on R d satisfying the ϑ -FPI with parameter C FPI ( ϑ ) . Then µ 1 ∗ µ 2 satisfies ϑ -wFPI with parameter β WFPI ( ϑ ) ( r ) + C FPI ( ϑ ) . Proof of Lemma 7. Let X,Y be two independent random variables such that X ∼ µ 2 and Y ∼ µ 1 . According to variance decomposition, we have for any function ϕ such that µ 1 ∗ µ 2 ( ϕ ) = 0 , <!-- formula-not-decoded --> Since X ∼ µ 2 and µ 2 satisfies the ϑ -FPI with parameter C FPI ( ϑ ) , we have <!-- formula-not-decoded --> ̸ <!-- formula-not-decoded --> Since Y ∼ µ 1 and µ 1 satisfies the ϑ -wFPI with parameter β WFPI ( ϑ ) , following the proof of Lemma 6, we have Var ( E [ ϕ X ( + Y ) | Y ]) <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> where the last inequality follows from the fact that µ 1 ∗ µ 2 ( ϕ ) = 0 and the convexity ∥·∥ ∞ . Combining (12) and (14), we have <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> = β 2 WFPI ( ϑ ) ( r ) + C FPI ( ϑ ) E µ ( ) 1 ∗ µ 2 ( ϕ ) + r ∥ ϕ ∥ ∞ . Lemma 7 is hence proved. ## B.5 Proofs for the Generalized Cauchy Examples In this section, we provide proofs for the two corollaries in Section 3.2. Proof of Corollary 4. According to [WW15, Corollary 1.2], π ν satisfies α -FPI with parameter C FPI ( ϑ ) for any α ≤ min(2 , ν ) . Therefore it follows from Theorem 3 that <!-- formula-not-decoded --> According to [MHFH + 23, Corollary 22], when ρ X 0 = N (0 , I d ) and d ≥ 2 , R ∞ ( ρ X 0 | π ν ) ≤ ln(2 ν/ 2 Γ( ν/ 2)) + ln( d + ν 2 e ) which implies χ 2 ( ρ X 0 | π ν ) = Θ( ) d . Therefore Corollary 4 follows from (15) and η ∈ (0 , 1) . Proof of Corollary 5. We prove the two part in the Corollary separately: (i) When ν ≥ 1 , according to [WW15, Corollary 1.2] π ν satisfies the 1 -FPI with parameter C FPI (1) . Corollary 3 applies with L = 4( d + ν ) and β = 1 4 / and the iteration complexity of Algorithm 2 is of order O ( C FPI (1) d 1 2 ( d + ν ) 4 ln( χ 2 ( ρ X 0 | π ν ) /ε ) ) . (ii) When ν ∈ (0 1) , , according to [WW15, Corollary 1.2], there exists a positive constant c such that π ν satisfies the 1 -wFPI with parameter <!-- formula-not-decoded --> Theorem 5 implies that <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> For any ε &gt; 0 and k ≥ 1 , pick r = exp ( -2 νR ∞ ( ρ X 0 | π ν ) ) c ν ε ν ( k +1) ν η ν , we have χ 2 ( ρ X k | π ν ) ≤ ε if <!-- formula-not-decoded --> Corollary 3 applies with L = ( d + ν /ν ) and β = ν/ 4 . Therefore, by choosing η = Θ( d -1 2 ( d + ν ) -4 ν ) , the iteration complexity in Algorithm 2 is of order <!-- formula-not-decoded --> ## C Proofs for the Lower Bounds on the Stable Proximal Sampler In this section we introduce the proofs for the lower bounds for the stable proximal sampler with parameter α when the target is the generalized Cauchy density with degrees of freedom strictly smaller than α . The lower bound is proved following the idea introduced in Section 2. Lemma 8. Suppose ( x , y k k ) k are the iterates of the stable proximal sampler with parameter α , step size η and target density π X ∝ exp( -V ) for some V : R d → R . Let G x ( ) = exp( κV ( x )) with κ ∈ (0 , 1) . Then, for every k ≥ 0 , <!-- formula-not-decoded --> where z k , with density p ( α ) 1 , is sampled independently from x k . Proof of Lemma 8. Recall that π X Y | ( x y | ) ∝ π X ( x p ) ( α ) ( η x, y ; ) . We have <!-- formula-not-decoded --> ↦ where Z y k = ∫ π X ( x p ) ( α ) η ( x -y k )d x = E [ π X ( y k + η 1 α z k ) | y k ] and z k is the α -stable random vector with density p ( α ) 1 , which is independent to y k , x k . Let T : R + → R be T r ( ) = r -κ . Since κ ∈ (0 1) , , T is convex and r → rT r ( ) is concave. According to the fact that G x ( ) = T π ( X )( x ) and Jensen's inequality, we have <!-- formula-not-decoded --> Since T is convex, apply Jensen's inequality again and we get <!-- formula-not-decoded --> where z ′ k is the α -stable random vector with density p ( α ) 1 , which is independent to x , z k k and the last identity follows from the self-similarity of α -stable process with ¯ z k ∼ p ( α ) 1 which is independent to x k . Lemma 9. Suppose ( x , y k k ) k are the iterates of the stable proximal sampler with parameter α , step size η and target density π X ∝ exp( -V ) satisfies <!-- formula-not-decoded --> for some ν 2 ∈ (0 , α ) and for all x ∈ R d . Let G x ( ) = exp( κV ( x )) with <!-- formula-not-decoded --> Then, for every k ≥ 0 and for all r &gt; 0 , <!-- formula-not-decoded --> where m ( α ) κ d ( + ν 2 ) = E [ | z k | κ d ( + ν 2 ) ] with z k being an α -stable random vector with density p ( α ) 1 . Moreover, for every N ≥ 0 , <!-- formula-not-decoded --> where ≲ is hiding a uniform positive constant factor. Proof of Lemma 9. Without loss of generality assume V (0) = 0 . Then, we have that, <!-- formula-not-decoded --> Therefore G x ( ) = exp( κV ( x )) ≤ (1 + | x | 2 ) κ d ( + ν 2 ) / 2 , Since κ ∈ ( ν 2 ( d + ν 2 ) -1 , α ( d + ν 2 ) -1 ) , G x ( ) = O | ( x | κ d ( + ν 2 ) ) when | x | ≫ 1 and E [ G x ( k +2 1 α η 1 α z k )] in Lemma 8 is finite. We have <!-- formula-not-decoded --> <!-- formula-not-decoded --> where the first inequality follows from the Young's inequality and m ( α ) = E [ | z k | κ d ( + ν 2 ) ] κ d ( + ν 2 ) with z k being an α -stable random vector with density p ( α ) 1 . (17) follows from Lemma 8. Furthermore, by induction we have <!-- formula-not-decoded --> Pick r = 2 κ d ( + ν 2 ) N and (18) is proved. Proof of Theorem 4. To apply Lemma 1, we choose G x ( ) = exp( κV ( x )) with κ ∈ ( ν 2 ( d + ν 2 ) -1 , α ( d + ν 2 ) -1 ) ⊂ (0 , 1) . Without loss of generality assume V (0) = 0 . Via Assumption 1, we have the estimates for V , <!-- formula-not-decoded --> By Lemma 2 we have <!-- formula-not-decoded --> We then invoke Lemma 1 and Lemma 9 to obtain <!-- formula-not-decoded --> <!-- formula-not-decoded --> The fact that κ ∈ ( ν 2 ( d + ν 1 ) -1 , α ( d + ν 2 ) -1 ) ensures that the supremum on the right side is always positive. In particular, picking y such that <!-- formula-not-decoded --> we obtain that <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> where ≳ is hiding a uniform positive constant factor. Therefore, for any α ∈ ( ν 2 ( d + ν 2 ) d + ν 1 , 2] and δ ∈ (0 , α -ν 2 ( d + ν 2 ) d + ν 1 ) , we can choose κ = α -δ d + ν 2 ∈ ( ν 2 d + ν 1 , α d + ν 2 ) and get that <!-- formula-not-decoded --> <!-- formula-not-decoded --> Theorem 4 then follows by taking τ = α -δ . <!-- image --> Figure 1: Comparison between Gaussian and Stable Proximal Sampler: target is chosen to be onedimensional student-t with center 0 and 4 degrees of freedom; initialization is chosen x 0 = 20 . <!-- image --> ## C.1 Further Discussions on Lower bounds of the stable proximal sampler To derive a lower bound for the stable proximal sampler with parameter α , it is worth mentioning that there is an extra difficulty applying our method when ν ≥ α . Recall that when ν ∈ (0 , α ) , π ν has heavier tail than ρ X k does. Therefore, when we apply <!-- formula-not-decoded --> to study the lower bound, it suffices to derive a lower bound on π ν ( G ≥ y ) , and an upper bound on ρ X k ( G ≥ y ) which is smaller than the lower bound on π ν ( G ≥ y ) . Deriving these bounds is not too hard: the lower bound can be obtained by looking at an explicit integral against π ν directly and the upper bound is derived based on the fractional absolute moment accumulation of the isotropic α -stable random variables along the stable proximal sampler. However, when ν ≥ α , we expect that ρ X k has heavier tail than π ν . Therefore, to apply (19), we need to find an upper bound on π ν ( G ≥ y ) , and a lower bound on ρ X k ( G ≥ y ) which is smaller than the upper bound on π ν ( G ≥ y ) . Notice that ρ X k ( G ≥ y ) is a quantity varying along the trajectory of the stable proximal sampler. Deriving a lower bound along the trajectory is essentially more challenging than deriving an upper bound. In order to derive a satisfying lower bound in this case, it hence remains to characterize the stable proximal sampler as an approximation of an appropriate gradient flow, just as that the Browniandriven proximal sampler can be interpret as the entropy-regularized JKO scheme in [CCSW22]; see also Section 5. To understand this kind of gradient flow approximations itself is an interesting future work as it may help us to understand and characterize the class of MCMC samplers that utilize heavy-tail samples to approximate lighter-tail target densities, which is non-standard compared to commonly used MCMC samplers such as ULA, MALA, etc. ## D Numerical Illustrations In this section, we present numerical results that illustrate the improved performance of the proximal sampler with stable oracles ( α = 1 ) compared to that with Gaussian oracles. We first sample from the one-dimensional student-t distribution with center zero and 4 degrees of freedom by running the proximal samplers with different oracles in parallel for 100 times. Each individual chain is run for 100 iterations with step-size η = 0 1 . . Figures 1,2,3 present the convergence results for different initializations x 0 = 20 5 , , -5 respectively. In each figure, the first column shows the means and variances of the iterates along the trajectories; the center column shows the histograms of the last iterates and the target density (red curve); the last column shows the convergence of Wasserstein2 distance along the trajectories. We also sample from the two-dimensional student-t distribution with center at the origin and 4 degrees of freedom by running the proximal samplers with different oracles in parallel for 30 times. Each individual chain is run for 20 iterations with step-size η = 0 1 . with the initialization at x 0 = [5 1] , . In Figure 4, we present the convergence results, the first column showing the means and variances of the first-coordinates along the trajectories, the center column showing the histograms of first-coordinate in the last iterates and the first-coordinate marginal density of the target distribution (red curve), and the last column showing the convergence of Wasserstein2 distance along the trajectories. <!-- image --> Figure 2: Comparison between Gaussian and Stable Proximal Sampler: target is chosen to be onedimensional student-t with center 0 and 4 degrees of freedom; initialization is chosen x 0 = 5 . Figure 3: Comparison between Gaussian and Stable Proximal Sampler: target is chosen to be onedimensional student-t with center 0 and 4 degrees of freedom; initialization is chosen x 0 = -5 . <!-- image --> Figure 4: Comparison between Gaussian and Stable Proximal Sampler: target is chosen to be twodimensional student-t with center (0 0) , and 4 degrees of freedom; initialization is chosen x 0 = [5 1] , . <!-- image --> ## NeurIPS Paper Checklist ## 1. Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: [Yes] Justification: The main claim in the abstract and introduction is the separation result between Gaussian and Proximal Sampler. The rest of the sections are exactly stating (and proving) the aforementioned separation result. ## Guidelines: - · The answer NA means that the abstract and introduction do not include the claims made in the paper. - · The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. - · The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. - · It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. ## 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: Please see Section 5. ## Guidelines: - · The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. - · The authors are encouraged to create a separate "Limitations" section in their paper. - · The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. - · The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. - · The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. - · The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. - · If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. - · While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. ## 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? ## Answer: [Yes] Justification: The assumptions are listed in the respective theorem. The (correct) proofs are provided in the appendix. ## Guidelines: - · The answer NA means that the paper does not include theoretical results. - · All the theorems, formulas, and proofs in the paper should be numbered and crossreferenced. - · All assumptions should be clearly stated or referenced in the statement of any theorems. - · The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. - · Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. - · Theorems and Lemmas that the proof relies upon should be properly referenced. ## 4. Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [NA] Justification: Our work is primarily theoretical. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. - · If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. - · Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. - · While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example - (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. - (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. - (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). - (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. ## 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [NA] Justification: Our work is primarily theoretical. ## Guidelines: - · The answer NA means that paper does not include experiments requiring code. - · Please see the NeurIPS code and data submission guidelines ( https://nips.cc/ public/guides/CodeSubmissionPolicy ) for more details. - · While we encourage the release of code and data, we understand that this might not be possible, so 'No' is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). - · The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines ( https: //nips.cc/public/guides/CodeSubmissionPolicy ) for more details. - · The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. - · The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. - · At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). - · Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. ## 6. Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [NA] Justification: Our work is primarily theoretical. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. - · The full details can be provided either with the code, in appendix, or as supplemental material. ## 7. Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [NA] Justification: Our work is primarily theoretical. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. - · The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). - · The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) - · The assumptions made should be given (e.g., Normally distributed errors). - · It should be clear whether the error bar is the standard deviation or the standard error of the mean. - · It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. - · For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). - · If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. ## 8. Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [NA] Justification: Our work is primarily theoretical. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. - · The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. - · The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). ## 9. Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ? Answer: [Yes] Justification: The authors have read the Ethics Guideline and followed it in the paper preperation. Guidelines: - · The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. - · If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. - · The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). ## 10. Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [NA] Justification: Our work is primarily theoretical. Guidelines: - · The answer NA means that there is no societal impact of the work performed. - · If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. - · Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. - · The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. - · The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. - · If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). ## 11. Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [NA] Justification: Our work is primarily theoretical. ## Guidelines: - · The answer NA means that the paper poses no such risks. - · Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. - · Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. - · We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. ## 12. Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: [NA] Justification: Our work is primarily theoretical. ## Guidelines: - · The answer NA means that the paper does not use existing assets. - · The authors should cite the original paper that produced the code package or dataset. - · The authors should state which version of the asset is used and, if possible, include a URL. - · The name of the license (e.g., CC-BY 4.0) should be included for each asset. - · For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. - · If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. - · For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. - · If this information is not available online, the authors are encouraged to reach out to the asset's creators. ## 13. New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [NA] Justification: Our work is primarily theoretical. ## Guidelines: - · The answer NA means that the paper does not release new assets. - · Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. - · The paper should discuss whether and how consent was obtained from people whose asset is used. - · At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. ## 14. Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [NA] Justification: Our work is primarily theoretical. ## Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. - · According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. ## 15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [NA] Justification: Our work is primarily theoretical. ## Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. - · We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. - · For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
zuWgB7GerW
How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning
We show that deep neural networks (DNNs) can efficiently learn any composition of functions with bounded $F_{1}$-norm, which allows DNNs to break the curse of dimensionality in ways that shallow networks cannot. More specifically, we derive a generalization bound that combines a covering number argument for compositionality, and the $F_{1}$-norm (or the related Barron norm) for large width adaptivity. We show that the global minimizer of the regularized loss of DNNs can fit for example the composition of two functions $f^{*}=h\circ g$ from a small number of observations, assuming $g$ is smooth/regular and reduces the dimensionality (e.g. $g$ could be the modulo map of the symmetries of $f^{*}$), so that $h$ can be learned in spite of its low regularity. The measures of reguarity we consider is the Sobolev norm with different levels of differentiability, which is well adapted to the $F_{1}$ norm. We compute scaling laws empirically, and observe phase transitions depending on whether $g$ or $h$ is harder to learn, as predicted by our theory.
https://openreview.net/pdf/d47299e76cea5209510c750a7137c8f8ce0de3bd.pdf
[ { "confidence": 3, "rating": 5, "review_id": "HoG0k5Pjq5", "review_text": "This paper introduces Accordion Networks (AccNets), a novel neural network structure composed of multiple shallow networks. The authors propose a generalization bound for AccNets that leverages the F1-norms and Lipschitz constants of the subnetworks, demonstrating that these networks can break the curse of dimensionality by efficiently learning compositions of Sobolev functions. The paper also provides theoretical insights and empirical validation, showcasing the superior performance of AccNets in learning complex compositional tasks compared to shallow networks and kernel methods.\n\nThe introduction of Accordion Networks (AccNets) as a novel neural network structure is a creative and original contribution. The paper provides a thorough theoretical analysis supported by empirical evidence, ensuring the soundness of its claims. The ability of AccNets to break the curse of dimensionality by learning compositional functions efficiently addresses a fundamental challenge in high-dimensional learning tasks.\n\n1. The practical implementation of the proposed regularization methods might be challenging, particularly the first one requiring infinite width. \n\n2. The paper mentions the difficulty in optimizing Lipschitz constants, which could be a limitation in practical applications.\n\n3. Additional experiments on more diverse real-world datasets could further demonstrate the robustness and generalizability of AccNets.\n\n4. Although the author has discussed the differences between DNN and AccNet, there is still not enough information for me to be sure in which settings to use AccNet and in which settings to use DNN. More clear differences and applicable conditions, especially the shortcomings of each need to be pointed out.\n\nCan the authors provide more details on the computational complexity of training Accordion Networks compared to traditional DNNs?\n\nHow sensitive are the generalization bounds to the choice of hyperparameters, particularly the Lipschitz constants and F1-norms?\n\nAre there any specific types of tasks or datasets where Accordion Networks might not perform as well as traditional methods?" }, { "confidence": 3, "rating": 7, "review_id": "QgS94f64r7", "review_text": "The authors present a generalization bound for deep neural networks that describes how depth enables models to learn functions that are compositions of Sobolev functions. To do this, they both prove a generalization bound for compositions of accordion networks (densely connected networks with a low-rank weight structure) and for compositions of Sobolev functions. They then present a sample efficiency result for different kinds of regularization on accordion networks.\n\nI really liked this paper and would like to see it accepted to NeurIPS. It addresses an important question: how does depth change generalization bounds for deep neural networks? To my knowledge, not many papers so far have addressed this question and I found the findings presented here very interesting and well embedded within prior methodology.\n\nI also found the paper very well written. I found it easy to follow along despite the highly technical nature of the results (note that I did not check the proofs in particular detail). I especially appreciated the remarks explaining different potential extensions and limitations.\n\nFinally, the theory appears to be able to explain certain empirical phenomena (in networks trained under realistic paradigms) at least qualitatively (though note that I had a few questions I will mention under weaknesses and questions). This indicates to me that it is a promising way for thinking about generalization in deep neural networks.\n\n1. I would like to see a more thorough comparison with shallow networks and generalization bounds, as this comparison is a central argument for the usefulness of the presented theory. While it is clear how the findings for the shallow network are a special case of the findings on the deep networks (as presented in Thm. 1), it remains a bit unclear to me how the theory can explain improved generalization in deep compared to shallow networks. The authors certainly present different several pieces of evidence on this: both Fig. 1 and Fig. 3 demonstrate that shallow networks exhibit worse scaling. I also appreciated the theoretical explanation of a particular contrast in l. 256-261. However, I think it would be really useful to provide a general theoretical explanation for this difference and test it empirically: would it be possible to extend the theoretical comparison in l. 256-261 to the general experimental setup studied in the figures --- and if so, would this theoretical comparison predict the conditions under which deep networks have the strongest advantages over shallow networks (or perhaps the conditions under which they don't perform that much better)? Not only would this serve as a useful validation of the theory, I think it would also provide a more extensive intuition for the authors' findings.\n\n2. I appreciated the fact that the authors compare their findings with related work wherever this becomes relevant. However, I think a (potentially brief) section comparing the results here to other theoretical investigations of depth in deep networks (perhaps using different approaches) would be useful. \n\n3. The linked codebase does not contain the notebooks indicated in the README as far as I can tell and therefore currently can't be used to directly reproduce the findings.\n\n4. I believe the figures would still benefit from error bars or some other indication of the overall statistical error in the findings. I agree that the main contribution of this paper is theoretical, but since the experiments test the empirical validity of the theory, I believe it is nevertheless important to get a sense for the overall deviation in these findings (e.g. across model seeds). If the authors are concerned about a lack of clarity, they could leave the bars out of the main figures but add supplementary figures with error bars. Moreover, some of the lines in Fig. 1 do contain error bars and it would be good to clarify what these error bars represent.\n\n1. Do you think my suggestion in point 1 of the weaknesses make sense or do you have a reason why you see it as unnecessary?\n\n2. As far as I understand, the reason for the asymmetry between $\\nu_g$ and $\\nu_h$ in Fig. 2 is the different dimensionality, correct? It would be good to mention these dimensionalities, as I was only able to find them in the appendix.\n\n3. Could you clarify why in Fig. 2, you're using the scalings from Prop 3 rather than from Thm. 5?" }, { "confidence": 3, "rating": 6, "review_id": "2IngJYVbr1", "review_text": "The authors introduce accordion networks (AccNets), which are compositions of multiple shallow networks. By leveraging prior workthat computes norm-based generalization bounds for shallow two-layer networks, the authors bound the complexity of a deep AccNet (as measured by its F1 norm) but the sum of the complexities of the individual shallow networks. They empirically observe that the rates predicted on real-world data are roughly representative of the trained networks, and are indeed much better than those for kernels trained on the same tasks. They put forth a nontrivial scaling law for the excess risk: $N^{-\\mathrm{min}(1/2, \\nu_g/d_{in}, \\nu_h/d_{mid})}$ for an Acc Net compared to $\\mathcal L \\sim N^{-\\mathrm{min}(1/2, \\nu_g/d_{in}, \\nu_h/d_{in})}$ for a kernel in terms of the dimensionalities $d$ and Sobolev constants $\\nu$ of the respective spaces and functions. From this, the authors obtain predictions of several phases, that they put forth experiments to verify.\n\nThe paper tackles a very important open question in the theory of deep learning, for which not much progress has been made. By creatively leveraging results for shallow network in composition, the authors arrive at a nontrivial bound for deep nets. The empirics are a very compelling and welcome part of the paper. The phase diagrams illustrate the nontrivial predictivity of the theory, especially at the level of the rates. This may have important implications for scaling laws. Modulo minor revisions in discussion and exposition, the whole paper is quite readable for a relatively broad audience.\n\nI am not sure how compelling the phase plots in Figure 2 are. The bounds in general are extremely loose, however the comparison of the rates in Figure 2c and Figure 3 is very promising. In general, however, it is the experience of the reviewer that measuring a rate is an extremely finicky business. It is therefore important to add a section in the appendix explicitly stating how the rates were obtained and measured. I also strongly encourage the authors to make the code for all figures public. \n\nBecause they are used very early on throughout the paper, it is the opinion of the reviewer that the notions of F1 distance and Sobolev norm should be defined earlier on in the paper. Without this, it seems like the audience will be constrained to the set of learning theorists familiar with these terms. However, if these terms are defined early on, the paper becomes remarkably accessible to a much broader audience.\n\nThe plot labels in Figures 2 and 3 are very difficult to read. \n\nA small comment: I have not seen the term \"modulo space\" used before. Often the term is \"quotient space\" \n\nThe sentence defining the $F_1$ ball (above theorem 1) is confusing, circular, and difficult to read. Please rewrite it.\n\nThe excess rate formula $\\mathcal L \\sim N^{-\\mathrm{min}(1/2, \\nu_g/d_{in}, \\nu_h/d_{mid})}$ is a very important result and I recommend that it be formatted for display, not inline.\n\nHow are you measuring \"dimension\" in 4.1.1? A high-dimensional gaussian with spectral decay of its covariance going as $k^{-\\alpha}$ for capacity exponent $\\alpha$ is nominally \"full dimensional\" since it is not strictly speaking constrained to a sub-manifold, and yet basic results in kernel theory and high-dimensional linear regression can show that the generalization error achieves a much better rate at larger values of $\\alpha$. Specifically, a model with capacity exponent $\\alpha$ and source exponent $r$ achieves a rate of $N^{-2\\alpha min(r, 1)}$. See, e.g. https://arxiv.org/abs/2105.15004 . Such power law anisotropy is in abundant in natural data. In particular shallow two layer networks in the lazy limit can achieve this scaling for such 'easy tasks' with quick spectral decay. On the other hand, the bounds that you state cannot decay faster than $N^{-1/2}$. \n* In this sense, it seems that the bounds (shallow or deep) presented are certainly not tight for some datasets. Am I incorrect in concluding this? Do you have an intuition for what causes the breakdown in correctly predicting the error rates in this case?\n* Given that they breakdown in that setting, what about the datasets that you study makes it so that the scaling law predictions seem to hold?" } ]
## How DNNs break the Curse of Dimensionality: Compositionality and Symmetry Learning Anonymous Author(s) Affiliation Address email ## Abstract We show that deep neural networks (DNNs) can efficiently learn any composition 1 of functions with bounded F 1 -norm, which allows DNNs to break the curse of 2 dimensionality in ways that shallow networks cannot. More specifically, we 3 derive a generalization bound that combines a covering number argument for 4 compositionality, and the F 1 -norm (or the related Barron norm) for large width 5 adaptivity. We show that the global minimizer of the regularized loss of DNNs can 6 fit for example the composition of two functions f ∗ = h ◦ g from a small number 7 of observations, assuming g is smooth/regular and reduces the dimensionality (e.g. 8 g could be the modulo map of the symmetries of f ∗ ), so that h can be learned in 9 spite of its low regularity. The measures of regularity we consider is the Sobolev 10 norm with different levels of differentiability, which is well adapted to the F 1 norm. 11 We compute scaling laws empirically and observe phase transitions depending on 12 whether g or h is harder to learn, as predicted by our theory. 13 ## 1 Introduction 14 - One of the fundamental features of DNNs is their ability to generalize even when the number of 15 - neurons (and of parameters) is so large that the network could fit almost any function [46]. Actually 16 - DNNs have been observed to generalize best when the number of neurons is infinite [8, 21, 20]. 17 - The now quite generally accepted explanation to this phenomenon is that DNNs have an implicit 18 - bias coming from the training dynamic where properties of the training algorithm lead to networks 19 - that generalize well. This implicit bias is quite well understood in shallow networks [11, 36], in 20 - linear networks [24, 30], or in the NTK regime [28], but it remains ill-understood in the general deep 21 - nonlinear case. 22 - In both shallow networks and linear networks, one observes a bias towards small parameter norm 23 - (either implicit [12] or explicit in the presence of weight decay [42]). Thanks to tools such as the 24 - F 1 -norm [5], or the related Barron norm [44], or more generally the representation cost [14], it is 25 - possible to describe the family of functions that can be represented by shallow networks or linear 26 - networks with a finite parameter norm. This was then leveraged to prove uniform generalization 27 - 28 - 29 - bounds (based on Rademacher complexity) over these sets [5], which depend only on the parameter norm, but not on the number of neurons or parameters. - Similar bounds have been proposed for DNNs [7, 6, 39, 33, 25, 40], relying on different types of 30 - norms on the parameters of the network. But it seems pretty clear that we have not yet identified 31 - the 'right' complexity measure for deep networks, as there remains many issues: these bounds are 32 - typically orders of magnitude too large [29, 23], and they tend to explode as the depth L grows [40]. 33 - Two families of bounds are particularly relevant to our analysis: bounds based on covering numbers 34 - which rely on the fact that one can obtain a covering of the composition of two function classes from 35 - covering of the individual classes [7, 25], and path-norm bounds which extend the techniques behind - 36 the F 1 -norm bound from shallow networks to the deep case [32, 6, 23]. 37 - Another issue is the lack of approximation results to accompany these generalization bounds: many 38 - different complexity measures R θ ( ) on the parameters θ of DNNs have been proposed along with 39 - guarantees that the generalization gap will be small as long as R θ ( ) is bounded, but there are often 40 - little to no result describing families of functions that can be approximated with a bounded R θ ( ) 41 - norm. The situation is much clearer in shallow networks, where we know that certain Sobolev spaces 42 can be approximated with bounded F 1 -norm [5]. 43 - We will focus on approximating composition of Sobolev functions, and obtaining close to optimal 44 - rates. This is quite similar to the family of tasks considered [39], though the complexity measure we - 45 consider is quite different, and does not require sparsity of the parameters. 46 ## 1.1 Contribution 47 We consider Accordion Networks (AccNets), which are the composition of multiple shallow 48 networks f L :1 = f L ◦ · · · ◦ f 1 , we prove a uniform generalization bound L ( f L :1 ) -L ˜ N ( f L :1 ) ≲ 49 R f , . . . , f ( 1 L ) log N √ , for a complexity measure 50 - N <!-- formula-not-decoded --> - that depends on the F 1 -norms ∥ f ℓ ∥ F 1 and Lipschitz constanst Lip f ( ℓ ) of the subnetworks, and the 51 intermediate dimensions d , . . . , d 0 L . This use of the F 1 -norms makes this bound independent of the 52 widths w , . . . , w 1 L of the subnetworks, though it does depend on the depth L (it typically grows 53 linearly in L which is still better than the exponential growth often observed). 54 - Any traditional DNN can be mapped to an AccNet (and vice versa), by spliting the middle weight 55 - matrices W ℓ with SVD USV into two matrices U S and SV to obtain an AccNet with 56 - so that the bound can be applied to traditional DNNs with bounded rank. - T √ √ T dimensions d ℓ = Rank W ℓ , 57 - We then show an approximation result: any composition of Sobolev functions f ∗ = f ∗ L ∗ ◦ · · · ◦ f ∗ 1 58 - can be approximated with a network with either a bounded complexity R θ ( ) or a slowly growing one. 59 - Thus under certain assumptions one can show that DNNs can learn general compositions of Sobolev 60 - functions. This ability can be interpreted as DNNs being able to learn symmetries, allowing them to 61 62 avoid the curse of dimensionality in settings where kernel methods or even shallow networks suffer - heavily from it. 63 - Empirically, we observe a good match between the scaling laws of learning and our theory, as well as 64 - qualitative features such as transitions between regimes depending on whether it is harder to learn the 65 - symmetries of a task, or to learn the task given its symmetries. 66 ## 2 Accordion Neural Networks and ResNets 67 - Our analysis is most natural for a slight variation on the traditional fully-connected neural networks - 68 (FCNNs), which we call Accordion Networks, which we define here. Nevertheless, all of our results 69 can easily be adapted to FCNNs. 70 - Accordion Networks (AccNets) are simply the composition of L shallow networks, that is f L :1 = 71 - f L ◦ · · · ◦ f 1 where f ℓ ( z ) = W σ V z ℓ ( ℓ + b ℓ ) for the nonlinearity σ : R → R , the d ℓ × w ℓ matrix 72 - W ℓ , w ℓ × d ℓ -1 matrix V ℓ , and w ℓ -dim. vector b ℓ , and for the widths w , . . . , w 1 L and dimensions 73 - d , . . . , d 0 L . We will focus on the ReLU σ x ( ) = max 0 { , x } for the nonlinearity. The parameters θ are 74 - made up of the concatenation of all ( W ,V , b ℓ ℓ ℓ ) . More generally, we denote f ℓ 2 : ℓ 1 = f ℓ 2 ◦ · · · ◦ f ℓ 1 75 for any 1 ≤ ℓ 1 ≤ ℓ 2 ≤ L . 76 - is large (or even infinitely large), while - We will typically be interested in settings where the widths w ℓ 77 the dimensions d ℓ remain finite or much smaller in comparison, hence the name accordion. 78 - If we add residual connections, i.e. f res 1: L = ( f L + id ) ◦ · · · ◦ ( f 1 + id ) for the same shallow nets 79 f , . . . , f 1 L we recover the typical ResNets. 80 Remark. The only difference between AccNets and FCNNs is that each weight matrix M ℓ of the 81 FCNN is replaced by a product of two matrices M ℓ = V W ℓ ℓ -1 in the middle of the network (such a 82 structure has already been proposed [34]). Given an AccNet one can recover an equivalent FCNN by 83 choosing M ℓ = V W ℓ ℓ -1 , M 0 = V 0 and M L +1 = W L . In the other direction there could be multiple 84 ways to split M ℓ into the product of two matrices, but we will focus on taking V ℓ = U √ S and 85 W ℓ -1 = √ SV T for the SVD decomposition M ℓ = USV T , along with the choice d ℓ = Rank M ℓ . 86 ## 2.1 Learning Setup 87 We consider a traditional learning setup, where we want to find a function f : Ω ⊂ R d in → R d out 88 that minimizes the population loss L ( f ) = E x ∼ π [ ℓ x, f ( ( x ))] for an input distribution π and a 89 ρ -Lipschitz and ρ -bounded loss function ℓ x, y ( ) ∈ [0 , B ] . Given a training set x , . . . , x 1 N of size N 90 we approximate the population loss by the empirical loss ˜ L N ( f ) = 1 N ∑ N i =1 ℓ x , f ( i ( x i )) that can be 91 minimized. 92 - To ensure that the empirical loss remains representative of the population loss, we will prove high 93 probability bounds on the generalization gap ˜ L N ( f ) -L ( f ) uniformly over certain functions families 94 f ∈ F . 95 - For regression tasks , we assume the existence of a true function f ∗ and try to minimize the distance 96 ℓ x, y ( ) = ∥ f ∗ ( x ) - ∥ y p for p ≥ 1 . If we assume that f ∗ ( x ) and y are uniformly bounded then one 97 can easily show that ℓ x, y ( ) is bounded and Lipschitz. We are particularly interested in the cases 98 p ∈ { 1 2 , } , with p = 2 representing the classical MSE, and p = 1 representing a L 1 distance. The 99 p = 2 case is amenable to 'fast rates' which take advantage of the fact that the loss increases very 100 slowly around the optimal solution f ∗ , We do not prove such fast rates (even though it might be 101 possible) so we focus on the p = 1 case. 102 ̸ ̸ For classification tasks on k classes, we assume the existence of a 'true class' function f ∗ : Ω → 103 { 1 , . . . , k } and want to learn a function f : Ω → R k such that the largest entry of f ( x ) is the f ∗ ( k ) -th 104 entry. One can consider the hinge cost ℓ x, y ( ) = max 0 1 { , -( y f ∗ ( k ) -max i = f ∗ ( x ) y i ) } , which is 105 zero whenever the margin y f ∗ ( k ) -max i = f ∗ ( x ) y i is larger than 1 and otherwise equals 1 minus the 106 margin. The hinge loss is Lipschitz and bounded if we assume bounded outputs y = f ( x ) . The 107 cross-entropy loss also fits our setup. 108 ## 3 Generalization Bound for DNNs 109 The reason we focus on accordion networks is that there exists generalization bounds for shallow 110 networks [5, 44], that are (to our knowledge) widely considered to be tight, which is in contrast to the 111 deep case, where many bounds exist but no clear optimal bound has been identified. Our strategy 112 is to extend the results for shallow nets to the composition of multiple shallow nets, i.e. AccNets. 113 Roughly speaking, we will show that the complexity of an AccNet f θ is bounded by the sum of the 114 complexities of the shallow nets f , . . . , f 1 L it is made of. 115 We will therefore first review (and slightly adapt) the existing generalization bounds for shallow 116 networks in terms of their so-called F 1 -norm [5], and then prove a generalization bound for deep 117 AccNets. 118 ## 3.1 Shallow Networks 119 The complexity of a shallow net f ( x ) = Wσ V x ( + b ) , with weights W ∈ R w × d out and 120 V ∈ R d in × w , can be bounded in terms of the quantity C = ∑ w i =1 ∥ W · i ∥ √ ∥ V i · ∥ 2 + b 2 i . 121 First note that the rescaled function 1 C f can be written as a convex combination 1 C f ( x ) = 122 ∑ w i =1 ∥ W · i ∥ √ ∥ V i · ∥ 2 + b 2 i C ¯ W σ V x · i ( ¯ i · + ¯ ) b i for ¯ W · i = W · i ∥ W · i ∥ , ¯ V i · = V i · √ ∥ V i · ∥ 2 + b 2 i , and ¯ b i = b i √ ∥ V i · ∥ 2 + b 2 i , 123 since the coefficients ∥ W · i ∥ √ ∥ V i · ∥ 2 + b 2 i C are positive and sum up to 1. Thus f belongs to C times the 124 convex hull 125 ↦ <!-- formula-not-decoded --> - We call this the F 1 -ball since it can be thought of as the unit ball w.r.t. the F 1 -norm ∥ f ∥ F 1 which we 126 define as the smallest positive scalar s such that 1 1 s f ∈ B F 1 . For more details in the single output 127 case, see [5]. 128 - The generalization gap over any F 1 -ball can be uniformly bounded with high probability: 129 - Theorem 1. For any input distribution π supported on the L 2 ball B (0 , b ) with radius b , we have 130 with probability 1 -δ , over the training samples x , . . . , x 1 N , that for all f ∈ B F 1 (0 , R ) = R B · F 1 131 <!-- formula-not-decoded --> This theorem is a slight variation of the one found in [5]: we simply generalize it to multiple outputs, 132 and also prove it using a covering number argument instead of a direct computation of the Rademacher 133 complexity, which will be key to obtaining a generalization bound for the deep case. But due to this 134 change of strategy we pay a log N cost here and in our later results. We know that the log N term 135 can be removed in Theorem 1 by switching to a Rademacher argument, but we do not know whether 136 it can be removed in deep nets. 137 - Notice how this bound does not depend on the width w , because the F 1 -norm (and the F 1 -ball) 138 themselves do not depend on the width. This matches with empirical evidence that shows that 139 increasing the width does not hurt generalization [8, 21, 20]. 140 - To use Theorem 1 effectively we need to be able to guarantee that the learned function will have a 141 small enough F 1 -norm. The F 1 -norm is hard to compute exactly, but it is bounded by the parameter 142 - norm: if f ( x ) = Wσ V x ( + ) b , then ∥ f ∥ F 1 ≤ 1 2 ( ∥ W ∥ 2 F + ∥ V ∥ 2 F + ∥ ∥ b 2 ) , and this bound is tight 143 144 if the width w is large enough and the parameters are chosen optimally. Adding weight decay/ L 2 - 145 regularization to the cost then leads to bias towards learning with small F 1 norm. ## 3.2 Deep Networks 146 Since an AccNet is simply the composition of multiple shallow nets, the functions represented by an 147 AccNet is included in the set of composition of F 1 balls. More precisely, if ∥ W ℓ ∥ 2 + ∥ V ℓ ∥ 2 + ∥ b ℓ ∥ 2 ≤ 148 2 R ℓ then f L :1 belongs to the set { g L ◦ · · · ◦ g 1 : g ℓ ∈ B F 1 (0 , R ℓ ) } for some R ℓ , which is width 149 agnostic. 150 - As already noticed in [7], the covering number number is well-behaved under composition, this 151 allows us to bound the complexity of AccNets in terms of the individual shallow nets it is made up of: 152 Theorem 2. Consider an accordion net of depth L and widths d L , . . . , d 0 , with corresponding set of 153 functions F = { f L :1 : ∥ f ℓ ∥ F 1 ≤ R , ℓ Lip ( f ℓ ) ≤ ρ ℓ } . With probability 1 -δ over the sampling of the 154 training set X from the distribution π supported in B (0 , b ) , we have for all f ∈ F 155 <!-- formula-not-decoded --> Theorem 2 can be extended to ResNets ( f L + id ) ◦ · · · ◦ ( f 1 + id ) by simply replacing the Lipschitz 156 constant Lip f ( ℓ ) by Lip f ( ℓ + id ) . 157 158 The Lipschitz constants Lip f ( ℓ ) are difficult to compute exactly, so it is easiest to simply bound it 159 160 by the product of the operator norms Lip f ( ℓ ) ≤ ∥ W ℓ ∥ op ∥ V ℓ ∥ op , but this bound can often be quite loose. The fact that our bound depends on the Lipschitz constants rather than the operator norms 161 ∥ W ℓ ∥ op , ∥ V ℓ ∥ op is thus a significant advantage. This bound can be applied to a FCNNs with weight matrices M ,... , M 1 L +1 , by replacing the middle 162 M ℓ with SVD decomposition USV T in the middle by two matrices W ℓ -1 = √ SV T and V ℓ = U √ S , 163 so that the dimensions can be chosen as the rank d ℓ = Rank M ℓ +1 . The Frobenius norm of the new 164 matrices equal the nuclear norm of the original one ∥ W ℓ -1 ∥ 2 F = ∥ V ℓ ∥ 2 F = ∥ M ℓ ∥ ∗ . Some bounds 165 1 This construction can be used for any convex set B that is symmetric around zero ( B = -B ) to define a norm whose unit ball is B . This correspondence between symmetric convex sets and norms is well known. Figure 1: Visualization of scaling laws. We observe that deep networks (either AccNets or DNNs) achieve better scaling laws than kernel methods or shallow networks on certain compositional tasks, in agreement with our theory. We also see that our new generalization bounds approximately recover the right saling laws (even though they are orders of magnitude too large overall). We consider a compositional true function f ∗ = h ◦ g where g maps from dimension 15 to 3 while h maps from 3 to 20, and we denote ν , ν g h for the number of times g, h are differentiable. In the first plot ν g = 8 , ν h = 1 so that g is easy to learn while h is hard, whereas in the second plot ν g = 9 , ν h = 9 , so both g and h are relatively easier. The third plot presents the decay in test error and generalization bounds for networks evaluated using the real-world dataset, WESAD [37]. <!-- image --> - assuming rank sparsity of the weight matrices also appear in [41]. And several recent results have 166 shown that weight-decay leads to a low-rank bias on the weight matrices of the network [27, 26, 19] 167 and replacing the Frobenius norm regularization with a nuclear norm regularization (according to the 168 above mentioned equivalence) will only increase this low-rank bias. 169 - We compute in Figure 1 the upper bound of Theorem 2 for both AccNets and DNNs, and even though 170 we observe a very large gap (roughly of order 10 3 ), we do observe that it captures rate/scaling of the 171 test error (the log-log slope) well. So this generalization bound could be well adapted to predicting 172 rates, which is what we will do in the next section. 173 174 Remark. Note that if one wants to compute this upper bound in practical setting, it is important to 175 176 177 178 179 180 train with L 2 regularization until the parameter norm also converges (this often happens after the train and test loss have converged). The intuition is that at initialization, the weights are initialized randomly, and they contribute a lot to the parameter norm, and thus lead to a larger generalization bound. During training with weight decay, these random initial weights slowly vanish, thus leading to a smaller parameter norm and better generalization bound. It might be possible to improve our generalization bounds to take into account the randomness at initialization to obtain better bounds 181 throughout training, but we leave this to future work. ## 4 Breaking the Curse of Dimensionality with Compositionality 182 In this section we study a large family of functions spaces, obtained by taking compositions of 183 Sobolev balls. We focus on this family of tasks because they are well adapted to the complexity 184 measure we have identified, and because kernel methods and even shallow networks do suffer from 185 the curse of dimensionality on such tasks, whereas deep networks avoid it (e.g. Figure 1). 186 More precisely, we will show that these sets of functions can be approximated by a AccNets with 187 bounded (or in some cases slowly growing) complexity measure 188 <!-- formula-not-decoded --> This will then allow us show that AccNets can (assuming global convergence) avoid the curse of 189 dimensionality, even in settings that should suffer from the curse of dimensionality, when the input 190 dimension is large and the function is not very smooth (only a few times differentiable). 191 Figure 2: A comparison of empirical and theoretical error rates. The first plot illustrates the log decay rate of the test error with respect to the dataset size N based on our empirical simulations. The second plot depicts the theoretical decay rate of the test error as discussed in Section 4.1, -min { 1 2 , ν g d in , ν h d mid } . The final plot on the right displays the difference between the two. The lower left region represents the area where g is easier to learn than h , the upper right where h is easier to learn than g , and the lower right region where both f and g are easy. <!-- image --> . ## 4.1 Composition of Sobolev Balls 192 The family of Sobolev norms capture some notion of regularity of a function, as it measures the size 193 of its derivatives. The Sobolev norm of a function f : R d in → R is defined in terms of its derivatives 194 ∂ α x f for some d in -multi-index α , namely the W ν,p ( π ) -Sobolev norm with integer ν and p ≥ 1 is 195 defined as 196 <!-- formula-not-decoded --> Note that the derivative ∂ α x f only needs to be defined in the 'weak' sense, which means that even 197 non-differentiable functions such as the ReLU functions can actually have finite Sobolev norm. 198 The Sobolev balls B W ν,p ( π ) (0 , R ) = { f : ∥ f ∥ W ν,p ( π ) ≤ R } are a family of function spaces with a 199 range of regularity (the larger ν , the more regular). This regularity makes these spaces of functions 200 learnable purely from the fact that they enforce the function f to vary slowly as the input changes. 201 Indeed we can prove the following generalization bound: 202 Proposition 3. Given a distribution π with support the L 2 ball with radius b, we have that with 203 probability 1 -δ for all functions f ∈ F = { f : ∥ f ∥ W ν, 2 ≤ R, ∥ f ∥ ∞ ≤ R } 204 <!-- formula-not-decoded --> 205 where E r ( N ) = N -1 2 if r &gt; 1 2 , E r ( N ) = N -1 2 log N if r = 1 2 , and E r ( N ) = N -r if r &lt; 1 2 . But this result also illustrates the curse of dimensionality: the differentiability ν needs to scale with 206 the input dimension d in to obtain a reasonable rate. If instead ν is constant and d in grows, then the 207 number of datapoints N needed to guarantee a generalization gap of at most ϵ scales exponentially in 208 d in , i.e. N ∼ ϵ -d in ν . One way to interpret this issue is that regularity becomes less and less useful the 209 larger the dimension: knowing that similar inputs have similar outputs is useless in high dimension 210 where the closest training point x i to a test point x is typically very far away. 211 ## 4.1.1 Breaking the Curse of Dimensionality with Compositionality 212 To break the curse of dimensionality, we need to assume some additional structure on the data or task 213 which introduces an 'intrinsic dimension' that can be much lower than the input dimension d in : 214 Manifold hypothesis : If the input distribution lies on a d surf -dimensional manifold, the error rates 215 typically depends on d surf instead of d in [38, 10]. 216 Figure 3: Comparing error rates for shallow and AccNets: shallow nets vs. AccNets, and kernel methods vs. AccNets. The left two graphs shows the empirical decay rate of test error with respect to dataset size (N) for both shallow nets and kernel methods. In contrast to our earlier empirical findings for AccNets, both shallow nets and kernel methods exhibit a slower decay rate in test error. The right two graphs present the differences in log decay rates between shallow nets and AccNets, as well as between kernel methods and AccNets. AccNets almost always obtain better rates, with a particularly large advantage at the bottom and middle-left. <!-- image --> . Known Symmetries: If f ∗ ( g · x ) = f ∗ ( x ) for a group action · w.r.t. a group G , then f ∗ can be 217 written as the composition of a modulo map g ∗ : R d in → R d in / G which maps pairs of inputs which 218 are equivalent up to symmetries to the same value (pairs x, y s.t. y = g · x for some g ∈ G ), and then 219 a second function h ∗ : R d in / G → R d out , then the complexity of the task will depend on the dimension 220 of the modulo space R d in / G which can be much lower. If the symmetry is known, then one can for 221 example fix g ∗ and only learn h ∗ (though other techniques exist, such as designing kernels or features 222 that respect the same symmetries) [31]. 223 Symmetry Learning: However if the symmetry is not known then both g ∗ and h ∗ have to be learned, 224 and this is where we require feature learning and/or compositionality. Shallow networks are able 225 to learn translation symmetries, since they can learn so-called low-index functions which satisfy 226 f ∗ ( x ) = f ∗ ( Px ) for some projection P (with a statistical complexity that depends on the dimension 227 of the space one projects into, not the full dimension [5, 2]). Low-index functions correspond exactly 228 to the set of functions that are invariant under translation along the kernel ker P . To learn general 229 symmetries, one needs to learn both h ∗ and the modulo map g ∗ simultaneously, hence the importance 230 of feature learning. 231 For g ∗ to be learnable efficiently, it needs to be regular enough to not suffer from the curse of 232 dimensionality, but many traditional symmetries actually have smooth modulo maps, for example 233 the modulo map g ∗ ( x ) = ∥ x ∥ 2 for rotation invariance. This can be understood as a special case of 234 composition of Sobolev functions, whose generalization gap can be bounded: 235 Theorem 4. Consider the function set F = F L ◦ · · · ◦ F 1 where F ℓ = 236 { f ℓ : R d ℓ -1 → R d ℓ s.t. ∥ f ℓ ∥ W ν ℓ , 2 ≤ R , ℓ ∥ f ℓ ∥ ∞ ≤ b , Lip f ℓ ( ℓ ) ≤ ρ ℓ } , and let r min = min ℓ r ℓ for 237 r ℓ = ν ℓ d ℓ -1 , then with probability 1 -δ we have for all f ∈ F 238 <!-- formula-not-decoded --> where C ℓ depends only on d ℓ -1 , d ℓ , ν ℓ , b ℓ -1 . 239 We see that only the smallest ratio r min matters when it comes to the rate of learning. And actually 240 the above result could be slightly improved to show that the sum over all layers could be replaced by 241 a sum over only the layers where the ratio r ℓ leads to the worst rate E r ℓ ( N ) = E r min ( N ) (and the 242 other layers contribute an asymptotically subdominant amount). 243 Coming back to the symmetry learning example, we see that the hardness of learning a function of 244 the type f ∗ = h ◦ g with inner dimension d mid and regularities ν g and ν h , the error rate will be (up 245 to log terms) N -min { 1 2 , νg d in , ν h d mid } . This suggests the existence of three regimes depending on which 246 term attains the minimum: a regime where both g and h are easy to learn and we have N -1 2 learning, 247 a regime g is hard, and a regime where h is hard. The last two regimes differentiate between tasks 248 where learning the symmetry is hard and those where learning the function knowing its symmetries is 249 hard. 250 - In contrast, without taking advantage of the compositional structure, we expect f ∗ to be only 251 min { ν , ν g h } times differentiable, so trying to learn it as a single Sobolev function would lead to an 252 1 min { νg ,ν h } 1 νg ν h - error rate of N -min { 2 , d in } = N -min { 2 , d in , d in } which is no better than the compositional 253 rate, and is strictly worse whenever ν h &lt; ν g and ν h d in &lt; 1 2 (we can always assume d mid ≤ d in since 254 one could always choose d = id ). 255 - Furthermore, since multiple compositions are possible, one can imagine a hierarchy of symmetries 256 that slowly reduce the dimensionality with less and less regular modulo maps. For example one could 257 imagine a composition f L ◦ · · · ◦ f 1 with dimensions d ℓ = d 0 2 -ℓ and regularities ν ℓ = d 0 2 -ℓ so that 258 -ℓ 1 259 the ratios remain constant r ℓ = d 0 2 1 - d 2 - ℓ +1 = 2 , leading to an almost parametric rate of N log N 260 2 0 even though the function may only be d 0 2 - L times differentiable. Without compositionality, the rate would only be N - 2 . - L 261 Remark. In the case of a single Sobolev function, one can show that the rate E ν d / ( N ) is in some 262 sense optimal, by giving an information theoretic lower bound with matching rate. A naive argument 263 suggests that the rate of E min { r 1 ,...,r L } ( N ) should similarly be optimal: assume that the minimum 264 r ℓ is attained at a layer ℓ , then one can consider the subset of functions such that the image 265 f ℓ -1:1 ( B (0 , r )) contains a ball B z, r ( ′ ) ⊂ R d ℓ -1 and that the function f L ℓ : +1 is β -non-contracting 266 ∥ f L ℓ : +1 ( x ) -f L ℓ : +1 ( y ) ∥ ≥ β ∥ x - ∥ y , then learning f L :1 should be as hard as learning f ℓ over the 267 ball B z, r ( ′ ) (more rigorously this could be argued from the fact that any ϵ -covering of f L :1 can be 268 mapped to an ϵ / β -covering of f ℓ ), thus forcing a rate of at least E r ℓ ( N ) = E min { r 1 ,...,r L } ( N ) . 269 An analysis of minimax rates in a similar setting has been done in [22]. 270 ## 4.2 Breaking the Curse of Dimensionality with AccNets 271 Now that we know that composition of Sobolev functions can be easily learnable, even in settings 272 where the curse of dimensionality should make it hard to learn them, we need to find a model that can 273 achieve those rates. Though many models are possible 2 , we focus on DNNs, in particular AccNets. 274 Assuming convergence to a global minimum of the loss of sufficiently wide AccNets with two types 275 of regularization, one can guarantee close to optimal rates: 276 - Theorem 5. Given a true function f ∗ L ∗ :1 = f ∗ L ∗ ◦ · · · ◦ f ∗ 1 going through the dimensions d , . . . , d ∗ 0 ∗ L ∗ , 277 along with a continuous input distribution π 0 supported in B (0 , b 0 ) , such that the distributions π ℓ 278 - Lip f ( ∗ ℓ ) ≤ ρ ℓ . For an infinite width AccNet with L ≥ L ∗ and dimensions d ℓ ≥ d , . . . , d ∗ 1 ∗ L ∗ -1 , we 281 have for the ratios ˜ r ℓ = ν ℓ d ∗ ℓ +3 : 282 - of f ∗ ℓ ( x ) (for x ∼ π 0 ) are continuous too and supported inside B (0 , b ℓ ) ⊂ R d ∗ ℓ . Further assume 279 that there are differentiabilities ν ℓ and radii R ℓ such that ∥ f ∗ ℓ ∥ W ν ℓ , 2 ( B (0 ,b ℓ )) ≤ R ℓ , and ρ ℓ such that 280 283 ↦ 284 - · At a global minimizer ˆ f L :1 of the regularized loss f , . . . , f 1 L → ˜ L N ( f L :1 ) + λ ∏ L ℓ =1 Lip f ( ℓ ) ∑ L ℓ =1 ∥ f ℓ ∥ F 1 Lip f ( ℓ ) √ d ℓ -1 + d ℓ , we have L ( ˆ f L :1 ) = ˜ ( O N -min { 1 2 ,r ˜ 1 ,...,r ˜ L ∗ } ) . 285 ↦ 286 - · At a global minimizer ˆ f L :1 of the regularized loss f , . . . , f 1 L →L ˜ N ( f L :1 )+ λ ∏ L ℓ =1 ∥ f ℓ ∥ F 1 , we have L ( ˆ f L :1 ) = ˜ ( O N -1 2 + ∑ L ∗ ℓ =1 max 0 ˜ { ,r ℓ - } 1 2 ) . There are a number of limitations to this result. First we assume that one is able to recover the global 287 minimizer of the regularized loss, which should be hard in general 3 (we already know from [5] that 288 this is NP-hard for shallow networks and a simple F 1 -regularization). Note that it is sufficient to 289 recover a network f L :1 whose regularized loss is within a constant of the global minimum, which 290 2 One could argue that it would be more natural to consider compositions of kernel method models, for example a composition of random feature models. But this would lead to a very similar model: this would be equivalent to a AccNet where only the W ℓ weights are learned, while the V , b ℓ ℓ weights remain constant. Another family of models that should have similar properties is Deep Gaussian Processes [15]. 3 Note that the unregularized loss can be optimized polynomially, e.g. in the NTK regime [28, 3, 16], but this is an easier task than findinig the global minimum of the regularized loss where one needs to both fit the data, and do it with an minimal regularization term. - might be easier to guarantee, but should still be hard in general. The typical method of training with 291 292 GD on the regularized loss is a greedy approach, which might fail in general but could recover almost 293 optimal parameters under the right conditions (some results suggest that training relies on first order 294 correlations to guide the network in the right direction [2, 1, 35]). - We propose two regularizations because they offer a tradeoff: 295 - First regularization: The first regularization term leads to almost optimal rates, up to the change 296 from r ℓ = ν ℓ d ∗ ℓ to r ℓ = ν ℓ d ∗ ℓ +3 which is negligible for large dimensions d ℓ and differentiabilities ν ℓ . The 297 first problem is that it requires an infinite width at the moment, because we were not able to prove 298 that a function with bounded F 1 -norm and Lipschitz constant can be approximated by a sufficiently 299 wide shallow networks with the same (or close) F 1 -norm and Lipschitz constant (we know from [5] 300 that it is possible without preserving the Lipschitzness). We are quite hopeful that this condition 301 might be removed in future work. 302 The second and more significant problem is that the Lipschitz constants Lip f ( ℓ ) are difficult to 303 optimize over. For finite width networks it is in theory possible to take the max over all linear regions, 304 but the complexity might be unreasonable. It might be more reasonable to leverage an implicit bias 305 instead, such as a large learning rate, because a large Lipschitz constant implies that the nework is 306 sensible to small changes in its parameters, so GD with a large learning rate should only converge to 307 minima with a small Lipschitz constant (such a bias is described in [26]). It might also be possible to 308 replace the Lipschitz constant in our generalization bounds, possibly along the lines of [43]. 309 - Second regularization: The second regularization term actually does not require an infinite width, 310 only a sufficiently large one. Also its regularization term is equivalent to ∏ ( ∥ W ℓ ∥ 2 + ∥ V ℓ ∥ 2 + ∥ b ℓ ∥ 2 ) 311 which is much closer to the traditional L 2 -regularization (and actually one could prove the same 312 or very similar rates for L 2 -regularization). The issue is that it lead to rates that could be far from 313 optimal depending on the ratios ˜ r ℓ : it recovers the same rate as the first regularization term if no 314 more than one ratio ˜ r ℓ is smaller than 1 2 , but if many of these ratios are above 1 2 , it can be arbitrarily 315 smaller. 316 - In Figure 2, we compare the empirical rates (by doing a linear fit on a log-log plot of test error as a 317 function of N ) and the predicted optimal rates min { 1 2 , ν g d in , ν h d mid } and observe a pretty good match. 318 Though surprisingly, it appears the the empirical rates tend to be slightly better than the theoretical 319 ones. 320 321 Remark. As can be seen in the proof of Theorem5, when the depth L is strictly larger than the true 322 323 324 325 depth L ∗ , one needs to add identity layers, leading to a so-called Bottleneck structure, which was proven to be optimal and observed empirically in [27, 26, 45]. These identity layers add a term ( ∗ ∗ that scales linearly in the additional depth - √ L L ) d to the first regularization, and an exponential min N prefactor (2 d ∗ L - L ∗ min ) to the second. It might be possible to remove these factors by leveraging the 326 bottleneck structure, or simply by switching to ResNets. ## 5 Conclusion 327 We have given a generalization bound for Accordion Networks and as an extension Fully-Connected 328 networks. It depends on F 1 -norms and Lipschitz constants of its shallow subnetworks. This allows us 329 to prove under certain assumptions that AccNets can learn general compositions of Sobolev functions 330 efficiently, making them able to break the curse of dimensionality in certain settings, such as in the 331 presence of unknown symmetries. 332 333 ## References 334 335 336 - [1] Emmanuel Abbe, Enric Boix Adsera, and Theodor Misiakiewicz. The merged-staircase property: a necessary and nearly sufficient condition for sgd learning of sparse functions on two-layer neural networks. In Conference on Learning Theory , pages 4782-4887. PMLR, 2022. - [2] Emmanuel Abbe, Enric Boix-Adserà, Matthew Stewart Brennan, Guy Bresler, and 337 Dheeraj Mysore Nagaraj. The staircase property: How hierarchical structure can guide deep 338 learning. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances 339 in Neural Information Processing Systems , 2021. 340 | 341 342 | [3] | Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. pages 242-252, 2019. | |-----------------|-------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 343 344 | [4] | Kendall Atkinson and Weimin Han. Spherical harmonics and approximations on the unit sphere: an introduction , volume 2044. Springer Science &Business Media, 2012. | | 345 346 | [5] | Francis Bach. Breaking the curse of dimensionality with convex neural networks. The Journal of Machine Learning Research , 18(1):629-681, 2017. | | 347 348 | [6] | Andrew R Barron and Jason MKlusowski. Complexity, statistical risk, and metric entropy of deep nets using total path variation. stat , 1050:6, 2019. | | 349 350 | [7] | Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. Advances in neural information processing systems , 30, 2017. | | 351 352 353 | [8] | Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine- learning practice and the classical bias-variance trade-off. Proceedings of the National Academy of Sciences , 116(32):15849-15854, 2019. | | 354 355 | [9] | M. S. Birman and M. Z. Solomjak. Piecewise-polynomial approximations of functions of the classes W α p . Mathematics of The USSR-Sbornik , 2:295-317, 1967. | | 356 357 358 | [10] | Minshuo Chen, Haoming Jiang, Wenjing Liao, and Tuo Zhao. Nonparametric regression on low-dimensional manifolds using deep relu networks: Function approximation and statistical recovery. Information and Inference: A Journal of the IMA , 11(4):1203-1253, 2022. | | 359 360 361 | [11] | Lénaïc Chizat and Francis Bach. On the Global Convergence of Gradient Descent for Over- parameterized Models using Optimal Transport. In Advances in Neural Information Processing Systems 31 , pages 3040-3050. Curran Associates, Inc., 2018. | | 362 363 364 365 | [12] | Lénaïc Chizat and Francis Bach. Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss. In Jacob Abernethy and Shivani Agarwal, editors, Proceedings of Thirty Third Conference on Learning Theory , volume 125 of Proceedings of Machine Learning Research , pages 1305-1338. PMLR, 09-12 Jul 2020. | | 366 | [13] | Feng Dai. Approximation theory and harmonic analysis on spheres and balls . Springer, 2013. | | 367 368 369 | [14] | Zhen Dai, Mina Karzand, and Nathan Srebro. Representation costs of linear neural networks: Analysis and design. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems , 2021. | | 370 371 372 373 | [15] | Andreas Damianou and Neil D. Lawrence. Deep Gaussian processes. In Carlos M. Carvalho and Pradeep Ravikumar, editors, Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics , volume 31 of Proceedings of Machine Learning Research , pages 207-215, Scottsdale, Arizona, USA, 29 Apr-01 May 2013. PMLR. | | 374 375 376 | [16] | Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In International Conference on Learning Representations , 2019. | | 377 378 | [17] | I. Dumer, M.S. Pinsker, and V.V. Prelov. On coverings of ellipsoids in euclidean spaces. IEEE Transactions on Information Theory , 50(10):2348-2356, 2004. | | 379 380 | [18] | Lawrence C Evans. Partial differential equations , volume 19. American Mathematical Society, 2022. | | 381 382 | [19] | Tomer Galanti, Zachary S Siegel, Aparna Gupte, and Tomaso Poggio. Sgd and weight decay provably induce a low-rank bias in neural networks. arXiv preprint arXiv:2206.05794 , 2022. | | 383 384 385 386 | [20] | Mario Geiger, Arthur Jacot, Stefano Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, and Matthieu Wyart. Scaling description of generalization with number of parameters in deep learning. Journal of Statistical Mechanics: Theory and Experiment , 2020(2):023401, 2020. | | 387 388 389 | [21] | Mario Geiger, Stefano Spigler, Stéphane d'Ascoli, Levent Sagun, Marco Baity-Jesi, Giulio Biroli, and Matthieu Wyart. Jamming transition as a paradigm to understand the loss landscape of deep neural networks. Physical Review E , 100(1):012115, 2019. | |-----------------|--------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 390 391 392 393 | [22] | Matteo Giordano, Kolyan Ray, and Johannes Schmidt-Hieber. On the inability of gaussian process regression to optimally learn compositional functions. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems , 2022. | | 394 395 396 | [23] | Antoine Gonon, Nicolas Brisebarre, Elisa Riccietti, and Rémi Gribonval. A path-norm toolkit for modern networks: consequences, promises and challenges. In The Twelfth International Conference on Learning Representations , 2023. | | 397 398 399 400 | [24] | Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro. Characterizing implicit bias in terms of optimization geometry. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning , volume 80 of Proceedings of Machine Learning Research , pages 1832-1841. PMLR, 10-15 Jul 2018. | | 401 402 | [25] | Daniel Hsu, Ziwei Ji, Matus Telgarsky, and Lan Wang. Generalization bounds via distillation. In International Conference on Learning Representations , 2021. | | 403 404 405 406 | [26] | Arthur Jacot. Bottleneck structure in learned features: Low-dimension vs regularity tradeoff. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems , volume 36, pages 23607-23629. Curran Associates, Inc., 2023. | | 407 408 | [27] | Arthur Jacot. Implicit bias of large depth networks: a notion of rank for nonlinear functions. In The Eleventh International Conference on Learning Representations , 2023. | | 409 410 411 | [28] | Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural Tangent Kernel: Convergence and Generalization in Neural Networks. In Advances in Neural Information Processing Systems 31 , pages 8580-8589. Curran Associates, Inc., 2018. | | 412 413 | [29] | Yiding Jiang, Behnam Neyshabur, Hossein Mobahi, Dilip Krishnan, and Samy Bengio. Fantastic generalization measures and where to find them. arXiv preprint arXiv:1912.02178 , 2019. | | 414 415 416 | [30] | Zhiyuan Li, Yuping Luo, and Kaifeng Lyu. Towards resolving the implicit bias of gradient descent for matrix factorization: Greedy low-rank learning. In International Conference on Learning Representations , 2020. | | 417 418 | [31] | Stéphane Mallat. Group invariant scattering. Communications on Pure and Applied Mathematics , 65(10):1331-1398, 2012. | | 419 420 | [32] | Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. In Conference on learning theory , pages 1376-1401. PMLR, 2015. | | 421 422 | [33] | Atsushi Nitanda and Taiji Suzuki. Optimal rates for averaged stochastic gradient descent under neural tangent kernel regime. In International Conference on Learning Representations , 2020. | | 423 424 | [34] | Greg Ongie and Rebecca Willett. The role of linear layers in nonlinear interpolating networks. arXiv preprint arXiv:2202.00856 , 2022. | | 425 426 427 | [35] | Leonardo Petrini, Francesco Cagnetta, Umberto MTomasini, Alessandro Favero, and Matthieu Wyart. How deep neural networks learn compositional data: The random hierarchy model. arXiv preprint arXiv:2307.02129 , 2023. | | 428 429 430 | [36] | Grant Rotskoff and Eric Vanden-Eijnden. Parameters as interacting particles: long time convergence and asymptotic error scaling of neural networks. In Advances in Neural Information Processing Systems 31 , pages 7146-7155. Curran Associates, Inc., 2018. | | 431 432 433 434 | [37] | Philip Schmidt, Attila Reiss, Robert Duerichen, Claus Marberger, and Kristof Van Laerhoven. Introducing wesad, a multimodal dataset for wearable stress and affect detection. In Proceedings of the 20th ACM International Conference on Multimodal Interaction , ICMI '18, page 400-408, New York, NY, USA, 2018. Association for Computing Machinery. | | 435 436 | [38] | Johannes Schmidt-Hieber. Deep relu network approximation of functions on a manifold. arXiv preprint arXiv:1908.00695 , 2019. | |-------------|--------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 437 438 | [39] | Johannes Schmidt-Hieber. Nonparametric regression using deep neural networks with ReLU activation function. The Annals of Statistics , 48(4):1875 - 1897, 2020. | | 439 440 | [40] | Mark Sellke. On size-independent sample complexity of relu networks. Information Processing Letters , page 106482, 2024. | | 441 442 443 | [41] | Taiji Suzuki, Hiroshi Abe, and Tomoaki Nishimura. Compression based bound for non- compressed network: unified generalization error analysis of large compressible deep neural network. In International Conference on Learning Representations , 2020. | | 444 445 | [42] | Zihan Wang and Arthur Jacot. Implicit bias of SGD in l 2 -regularized linear DNNs: One- way jumps from high to low rank. In The Twelfth International Conference on Learning Representations , 2024. | | 447 | [43] | Colin Wei and Tengyu Ma. Data-dependent sample complexity of deep neural networks via lipschitz augmentation. Advances in Neural Information Processing Systems , 32, 2019. | | 449 | [44] | E Weinan, Chao Ma, and Lei Wu. Barron spaces and the compositional function spaces for neural network models. arXiv preprint arXiv:1906.08039 , 2019. | | 451 452 | [45] | Yuxiao Wen and Arthur Jacot. Which frequencies do cnns need? emergent bottleneck structure in feature learning. to appear at ICML , 2024. | | 453 454 | [46] | Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. ICLR 2017 proceedings , Feb 2017. | - The Appendix is structured as follows: 455 456 - 1. In Section A, we describe the experimental setup and provide a few additional experiments. - 457 458 459 460 - 2. In Section B, we prove Theorems 1 and 2 from the main. - 3. In Section C, we prove Proposition 3 and Theorem 4. - 4. In Section D, we prove Theorem 5 and other approximation results concerning Sobolev functions. 461 - 5. In Section E, we prove a few technical results on the covering number. ## A Experimental Setup 4 462 In this section, we review our numerical experiments and their setup both on synthetic and real-world 463 datasets in order to address theoretical results more clearly and intuitively. 464 ## A.1 Dataset 465 ## A.1.1 Emperical Dataset 466 - The Matérn kernel is considered a generalization of the radial basis function (RBF) kernel. It 467 controls the differentiability, or smoothness, of the kernel through the parameter ν . As ν →∞ , the 468 - Matérn kernel converges to the RBF kernel, and as ν → 0 , it converges to the Laplacian kernel, a 469 470 0-differentiable kernel. In this study, we utilized the Matérn kernel to generate Gaussian Process (GP) - samples based on the composition of two Matérn kernels, K g and K h , with varying differentiability 471 - in the range [0.5,10]×[0.5,10]. The input dimension ( d in ) of the kernel, the bottleneck mid-dimension 472 - ( d mid ), and the output dimension ( d out ) are 15, 3, and 20, respectively. 473 - This outlines the general procedure of our sampling method for synthetic data: 474 475 - 1. Sample the training dataset X ∈ R D d × in 476 477 - 2. From X, compute the D × D kernel K g with given ν g - 3. From K g , sample Z ∈ R D d × mid with columns sampled from the Gaussian N (0 , K g ) . 478 - 4. From Z , compute K g with given ν h - 5. From K h , sample the test dataset Y ∈ R D d × out with columns sampled from the Gaussian 479 N (0 , K h ) . 480 We utilized four AMD Opteron 6136 processors (2.4 GHz, 32 cores) and 128 GB of RAM to generate 481 our synthetic dataset. The maximum possible dataset size for 128 GB of RAM is approximately 482 52,500; however, we opted for a synthetic dataset size of 22,000 due to the computational expense 483 associated with sampling the Matérn kernel. This decision was made considering the time complexity 484 of O ( n 3 ) and the space complexity of O ( n 2 ) involved. Out of the 22,000 dataset points, 20,000 were 485 allocated for training data, and 2,000 were used for the test dataset 486 ## A.1.2 Real-world dataset: WESAD 487 In our study, we utilized the Wearable Stress and Affect Detection (WESAD) dataset to train our 488 AccNets for binary classification. The WESAD dataset, which is publicly accessible, provides 489 multimodal physiological and motion data collected from 15 subjects using devices worn on the wrist 490 and chest. For the purpose of our experiment, we specifically employed the Empatica E4 wrist device 491 to distinguish between non-stress (baseline) and stress conditions, simplifying the classification task 492 to these two categories. 493 - After preprocessing, the dataset comprised a total of 136,482 instances. We implemented a train-test 494 split ratio of approximately 75:25, resulting in 100,000 instances for the training set and 36,482 495 496 instances for the test set. The overall hyperparameters and architecture of the AccNets model applied 497 498 499 to the WESAD dataset were largely consistent with those used for our synthetic data. The primary differences were the use of 100 epochs for each iteration of Ni from Ns, and a learning rate set to 1e-5. 4 The code used for experiments are publicly available here <!-- image --> Figure 4: A comparison: singular values of the weight matrices for DNN and AccNets models. The first two plots represent cases where N = 10000 while the right two plots correspond to N = 200.The number of outliers at the top of each plot signifies the rank of each network. The plots with N = 10000 datasets demonstrate a clearer capture of the true rank compared to those with N = 200 indicating that a higher dataset count provides more accurate rank determination <!-- image --> <!-- image --> <!-- image --> . ## A.2 Model setups 500 To investigate the scaling law of test error for our synthetic data, we trained models using N i 501 datapoints from our training data, where N = [100 200 500 1000 2000 5000 10000 20000] , , , , , , , . The 502 models employed for this analysis included the kernel method, shallow networks, fully connected 503 deep neural networks (FC DNN), and AccNets. For FC DNN and AccNets, we configured the 504 network depth to 12 layers, with the layer widths set as [ d in , 500 500 , , ..., 500 , d out ] for DNNs, and 505 [ d n, i 900 100 900 , , , ..., 100 900 , , d out ] for AccNets. 506 - To ensure a comparable number of neurons, the width for the shallow networks was set to 50,000, 507 resulting in dimensions of [ d in , 50000 , d out ] . 508 We utilized ReLU as the activation function and L 1 -norm as the cost function, with the Adam 509 optimizer. The total number of batch was set to 5, and the training process was conducted over 3600 510 epochs, divided into three phases. The detailed optimizer parameters are as follows: 511 - 1. 512 - For the first 1200 epochs: learning rate ( lr ) = 1 5 . ∗ 0 001 . , weight decay = 0 513 - 2. For the second 1200 epochs: lr = 0 4 . ∗ 0 001 . , weight decay = 0 002 . 514 - 3. For the final 1200 epochs: lr = 0 1 . ∗ 0 001 . , weight decay = 0 005 . We conducted experiments utilizing 12 NVIDIA V100 GPUs (each with 32GB of memory) over a 515 period of 6.3 days to train the synthetic dataset. In contrast, training the WESAD dataset required 516 only one hour on a single V100 GPU. 517 ## A.3 Additional experiments 518 ## B AccNet Generalization Bounds 519 The proof of generalization for shallow networks (Theorem 1) is the special case L = 1 of the proof 520 of Theorem 2, so we only prove the second: 521 Theorem 6. Consider an accordion net of depth L and widths d L , . . . , d 0 , with corresponding set 522 of functions F = { f L :1 : ∥ f ℓ ∥ F 1 ≤ R , ℓ Lip ( f ℓ ) ≤ ρ ℓ } with input space Ω = B (0 , r ) . For any 523 ρ -Lipschitz loss function ℓ x, f ( ( x )) with | ℓ x, y ( ) | ≤ c 0 , we know that with probability 1 -δ over the 524 sampling of the training set X from the distribution π , we have for all f ∈ F 525 <!-- formula-not-decoded --> Proof. The strategy is: (1) prove a covering number bound on F (2) use it to obtain a Rademacher 526 complexity bound, (3) use the Rademacher complexity to bound the generalization error. 527 (1) We define f ℓ = V ℓ ◦ σ ◦ W ℓ so that f θ = f L :1 = f L ◦ · · · ◦ f 1 . First notice that we can write each 528 f ℓ as convex combination of its neurons: 529 <!-- formula-not-decoded --> for ¯ w ℓ,i = w ℓ,i ∥ w ℓ,i ∥ , ¯ v ℓ,i = v ℓ,i ∥ v ℓ,i ∥ , R ℓ = ∑ ℓ i =1 ∥ v ℓ,i ∥ ∥ w ℓ,i ∥ and c ℓ,i = 1 R ℓ ∥ v ℓ,i ∥ ∥ w ℓ,i ∥ . 530 Let us now consider a sequence ϵ k = 2 -k for k = 0 , . . . , K and define ˜ v ( k ) ℓ,i , w ˜ ( k ) ℓ,i to be the ϵ k -covers 531 of ¯ v ℓ,i , w ¯ ℓ,i , furthermore we may choose ˜ v (0) ℓ,i = ˜ w (0) ℓ,i = 0 since every unit vector is within a ϵ 0 = 1 532 distance of the origin. We will now show that on can approximate f θ by approximating each of the f ℓ 533 by functions of the form 534 <!-- formula-not-decoded --> for indices i ( k ) ℓ,m = 1 , . . . , w ℓ choosen adequately. Notice that the number of functions of this type 535 equals the number of M k,ℓ quadruples (˜ v ( k ) ℓ,i ( k ) ℓ,m , w ˜ ( k T ) ℓ,i ( k ) ℓ,m , v ˜ ( k -1) ℓ,i ( k ) ℓ,m , w ˜ ( k -1) T ℓ,i ( k ) ℓ,m ) where these vectors belong 536 to the ϵ k - resp. ϵ k -1 -coverings of the d in - resp. d out -dimensional unit sphere. Thus the number of 537 such functions is bounded by 538 <!-- formula-not-decoded --> and we have this choice for all ℓ = 1 , . . . , L . We will show that with sufficiently large M k,ℓ this set 539 of functions ϵ -covers F which then implies that 540 <!-- formula-not-decoded --> We will use the probabilistic method to find the right indices i ( k ) ℓ,m to approximate a function f ℓ = 541 R ℓ ∑ w ℓ i =1 c ℓ,i ¯ v ℓ,i σ w ( ¯ T ℓ,i x ) with a function ˜ f ℓ . We take all i ( k ) ℓ,m to be i.i.d. equal to the index i = 542 1 , · · · , w ℓ with probability c ℓ,i , so that in expectation 543 <!-- formula-not-decoded --> <!-- formula-not-decoded --> We will show that this expectation is O ϵ ( K ℓ ) -close to f ℓ and that the variance of ˜ f ℓ goes to zero as 544 the M ℓ,k s grow, allowing us to bound the expected error E ∥ ∥ ∥ f L :1 -˜ f L :1 ∥ ∥ ∥ 2 π ≤ ϵ 2 which then implies 545 that there must be at least one choice of indices i ( k ) ℓ,m such that ∥ ∥ ∥ f L :1 -˜ f L :1 ∥ ∥ ∥ π ≤ ϵ . 546 ## Let us first bound the distance 547 <!-- formula-not-decoded --> Then we bound the trace of the covariance of ˜ f ℓ which equals the expected square distance between 548 ˜ f ℓ and its expectation: 549 <!-- formula-not-decoded --> ## Putting both together, we obtain 550 <!-- formula-not-decoded --> We will now use this bound, together with the Lipschitzness of f ℓ to bound the error 551 E ∥ ∥ ∥ f L :1 ( x ) -˜ f L :1 ( x ) ∥ ∥ ∥ 2 . We will do this by induction on the distances E ∥ ∥ ∥ f ℓ :1 ( x ) -˜ f ℓ :1 ( x ) ∥ ∥ ∥ 2 . 552 We start by 553 <!-- formula-not-decoded --> And for the induction step, we condition on the layers f ℓ -1:1 554 <!-- formula-not-decoded --> Now since 555 <!-- formula-not-decoded --> we obtain that 556 <!-- formula-not-decoded --> We define ˜ ρ 2 ℓ = ρ 2 ℓ [ 1 + 4 R 2 ℓ ρ 2 ℓ ( ϵ 2 K ℓ +9 ∑ K ℓ k =1 ϵ 2 k M k,ℓ )] and obtain 557 <!-- formula-not-decoded --> Thus for any distribution π over the ball B (0 , r ) , there is a choice of indices i ( k ) ℓ,m such that 558 <!-- formula-not-decoded --> We now simply need to choose K ℓ and M k,ℓ adequately. To reach an error of 2 ϵ , we choose 559 <!-- formula-not-decoded --> where ρ L :1 = ∏ L ℓ =1 ρ ℓ . Notice that that ϵ 2 K ℓ ≤ 1 4 ρ 2 L :1 r 2 ( ∑ L ℓ ′ =1 R ℓ ′ ρ ℓ ′ √ d ℓ ′ + d ℓ ′ -1 ) ρ ℓ √ d ℓ + d ℓ -1 R ℓ ϵ 2 . 560 Given s 0 = ∑ ∞ k =1 √ k 2 -k ≈ 1 3473 . &lt; ∞ , we define 561 <!-- formula-not-decoded --> So that for all ℓ 562 <!-- formula-not-decoded --> Now this also implies that 563 <!-- formula-not-decoded --> and thus 564 <!-- formula-not-decoded --> Putting it all together, we obtain that 565 <!-- formula-not-decoded --> Now since log N 2 ( S d ℓ -1 , ϵ ) = d ℓ log ( 1 ϵ +1 ) and 566 <!-- formula-not-decoded --> we have 567 <!-- formula-not-decoded --> The diameter of F is smaller than ρ L :1 r , so for all δ ≥ ρ L :1 r , log N 2 ( F , δ ) = 0 . For all δ ≤ ρ L :1 r 568 we choose ϵ = δ √ 2 e so that √ 2 exp ( ϵ 2 ρ 2 L :1 r 2 ) ϵ ≤ δ and therefore 569 <!-- formula-not-decoded --> (2) Our goal now is to use chaining / Dudley's theorem to bound the Rademacher complexity 570 R ( F ( X )) evaluated on a set X of size N (e.g. Lemma 27.4 in [Understanding Machine Learning]) 571 of our set: 572 Lemma 7. Let c = max f ∈F √ 1 N ∥ f ( X ) ∥ , then for any integer M &gt; 0 , 573 <!-- formula-not-decoded --> To apply it to our setting, first note that for all x ∈ B (0 , r ) , ∥ f L :1 ( x ) ∥ ≤ ρ L :1 r so that c = 574 max f ∈F √ 1 N ∥ f ( X ) ∥ ≤ ρ L :1 r , we then have 575 <!-- formula-not-decoded --> <!-- formula-not-decoded --> Taking M = ⌈ -log 2 ( 72 √ N s 0 √ e ∑ L ℓ ′ =1 R ℓ ′ ρ ℓ ′ √ d ℓ ′ + d ℓ ′ -1 )⌉ , we obtain 576 <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> (3) For any ρ -Lipschitz loss function ℓ x, f ( ( x )) with | ℓ x, y ( ) | ≤ c 0 , we know that with probability 577 1 -δ over the sampling of the training set X from the distribution π , we have for all f ∈ F 578 <!-- formula-not-decoded --> <!-- formula-not-decoded --> 579 ## C Composition of Sobolev Balls 580 Proposition 8 (Proposition 3 from the main.) . Given a distribution π with support in B (0 , r ) , we 581 have that with probability 1 -δ for all functions f ∈ F = { f : ∥ f ∥ W ν, 2 ≤ R, ∥ f ∥ ∞ ≤ R } 582 <!-- formula-not-decoded --> where E r ( N ) = N -1 2 if r &gt; 1 2 , E r ( N ) = N -1 2 log N if r = 1 2 , and E r ( N ) = N -r if r &lt; 1 2 . 583 Proof. (1) We know from Theorem 5.2 of [9] that the Sobolev ball B W ν, 2 (0 , R ) over any d -584 dimensional hypercube Ω satisfies 585 <!-- formula-not-decoded --> for a constant c and any measure π supported in the hypercube. 586 (2) By Dudley's theorem we can bound the Rademacher complexity of our function class B ( X ) 587 evaluated on any training set X : 588 <!-- formula-not-decoded --> If 2 ν = d , we take M = 1 2 log N and obtain the bound 589 <!-- formula-not-decoded --> If 2 ν &gt; d , we take M = ∞ and obtain the bound 590 <!-- formula-not-decoded --> If 2 ν &lt; d , we take M = ν d log N and obtain the bound 591 <!-- formula-not-decoded --> Putting it all together, we obtain that R ( B ( X )) ≤ C E 1 ν d / ( N ) . 592 (3) For any ρ -Lipschitz loss function ℓ x, f ( ( x )) with | ℓ x, y ( ) | ≤ c 0 , we know that with probability 593 1 -δ over the sampling of the training set X from the distribution π , we have for all f ∈ F 594 <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> 595 Proposition 9. Let F 1 , . . . , F L be set of functions mapping through the sets Ω 0 , . . . , Ω L , then if all 596 functions in F ℓ are ρ ℓ -Lipschitz, we have 597 <!-- formula-not-decoded --> Proof. For any distribution π 0 on Ω there is a ϵ 1 -covering ˜ F 1 of F 1 with ∣ ∣ ∣ ˜ F 1 ∣ ∣ ∣ ≤ N 2 ( F 1 , ϵ 1 ) then 598 for any ˜ f 1 ∈ F ˜ 1 we choose a ϵ 2 -covering ˜ F 2 w.r.t. the measure π 1 which is the measure of f 1 ( x ) if 599 x ∼ π 0 of F 2 with ∣ ∣ ∣ ˜ F 2 ∣ ∣ ∣ ≤ N 2 ( F 2 , ϵ ) , and so on until we obtain coverings for all ℓ . Then the set 600 ˜ = F { ˜ f L ◦ · · · ◦ ˜ f 1 : ˜ f 1 ∈ F ˜ 1 , . . . , f ˜ L ∈ F ˜ L } is a ∑ L ℓ =1 ρ L ℓ : +1 ϵ ℓ -covering of F = F L ◦ · · · ◦ F 1 , 601 indeed for any f = f L :1 we choose ˜ f 1 ∈ F ˜ 1 , . . . , f ˜ L ∈ F ˜ L that cover f , . . . , f 1 L , then ˜ f L :1 covers 602 f L :1 : 603 <!-- formula-not-decoded --> and log cardinality of the set ˜ F is bounded ∑ L ℓ =1 log N 2 ( F ℓ , ϵ ℓ ) 604 . Theorem 10. Let F = F L ◦ · · · ◦ F 1 where F ℓ = 605 { f ℓ : R d ℓ -1 → R d ℓ s.t. ∥ f ℓ ∥ W ν ℓ , 2 ≤ R , ℓ ∥ f ℓ ∥ ∞ ≤ b , Lip f ℓ ( ℓ ) ≤ ρ ℓ } , and let r ∗ = min ℓ r ℓ 606 for r ℓ = ν ℓ d ℓ -1 , then with probability 1 -δ we have for all f ∈ F 607 <!-- formula-not-decoded --> where C ℓ depends only on d ℓ -1 , d ℓ , ν ℓ , b ℓ -1 . 608 Proof. (1) We know from Theorem 5.2 of [9] that the Sobolev ball B W ν ℓ , 2 (0 , R ℓ ) over any d ℓ -609 dimensional hypercube Ω satisfies 610 <!-- formula-not-decoded --> for a constant C ℓ that depends on the size of hypercube and the dimension d ℓ and the regularity ν ℓ 611 and any measure π ℓ -1 supported in the hypercube. 612 Thus Proposition 9 tells us that the composition of the Sobolev balls satisfies 613 <!-- formula-not-decoded --> Given r ∗ = min ℓ r ℓ , we can bound it by ∑ L ℓ =1 ( C ℓ R ℓ ϵ ℓ ) 1 r ∗ and by then choosing ϵ ℓ = 614 <!-- formula-not-decoded --> ℓ <!-- formula-not-decoded --> (2,3) It the follows by a similar argument as in points (2, 3) of the proof of Proposition 8 that there is 616 a constant C 0 such that with probability 1 -δ for all f ∈ F 617 <!-- formula-not-decoded --> <!-- formula-not-decoded --> 618 ## D Generalization at the Regularized Global Minimum 619 In this section, we first give the proof of Theorem 5 and then present detailed proofs of lemmas used 620 in the proof. The lemmas are largely inspired by [5] and may be of independent interest. 621 ## 622 ## D.1 Theorem 5 in Section 4.2 Theorem 11 (Theorem 5 in the main) . Given a true function f ∗ L ∗ :1 = f ∗ L ∗ ◦· · · ◦ f ∗ 1 going through the 623 dimensions d , . . . , d ∗ 0 ∗ L ∗ , along with a continuous input distribution π 0 supported in B (0 , b 0 ) , such 624 that the distributions π ℓ of f ∗ ℓ ( x ) (for x ∼ π 0 ) are continuous too and supported inside B (0 , b ℓ ) ⊂ 625 R d ∗ ℓ . Further assume that there are differentiabilities ν ℓ and radii R ℓ such that ∥ f ∗ ℓ ∥ W ν ℓ , 2 ( B (0 ,b ℓ )) ≤ 626 R ℓ , and ρ ℓ such that Lip f ( ∗ ℓ ) ≤ ρ ℓ . For a infinite width AccNet with L ≥ L ∗ and constant width 627 d ≥ d , . . . , d ∗ ∗ ∗ , we have for the ratios ˜ r ℓ = ν ℓ ∗ : 628 1 L -1 d ℓ +3 629 ↦ 630 - · At a global minimizer ˆ f L :1 of the regularized loss f , . . . , f 1 L →L ˜ N ( f L :1 )+ λR f , . . . , f ( 1 L ) , we have L ( ˆ f L :1 ) = ˜ ( O N -min { 1 2 ,r ˜ 1 ,...,r ˜ L ∗ } ) . 631 ↦ 632 - · At a global minimizer ˆ f L :1 of the regularized loss f , . . . , f 1 L →L ˜ N ( f L :1 )+ λ ∏ L ℓ =1 ∥ f ℓ ∥ F 1 , we have L ( ˆ f L :1 ) = ˜ ( O N -1 2 + ∑ L ∗ ℓ =1 max 0 ˜ { ,r ℓ - } 1 2 ) . Proof. If f ∗ = f ∗ L ∗ ◦ · · · ◦ f ∗ 1 with L ∗ ≤ L , intermediate dimensions d , . . . , d ∗ 0 ∗ L ∗ , along with a 633 continuous input distribution π 0 supported in B (0 , b 0 ) , such that the distributions π ℓ of f ∗ ℓ ( x ) (for 634 x ∼ π 0 ) are continuous too and supported inside B (0 , b ℓ ) ⊂ R d ∗ ℓ . Further assume that there are 635 differentiabilities ν ∗ ℓ and radii R ℓ such that ∥ f ∗ ℓ ∥ W ν ℓ ∗ , 2 ( B (0 ,b ℓ )) ≤ R ℓ . 636 We first focus on the L = L ∗ case and then extend to the L &gt; L ∗ case. 637 - Each f ∗ ℓ can be approximated by another function ˜ f ℓ with bounded F 1 -norm and Lipschitz constant. 638 Actually if 2 ν ∗ ℓ ≥ d ∗ ℓ -1 +3 one can choose ˜ f ℓ = f ∗ ℓ since ∥ f ∗ ℓ ∥ F 1 ≤ C R ℓ ℓ by Lemma 14, and by 639 assumption Lip f ( ˜ ) ℓ ≤ ρ ℓ . If 2 ν ∗ ℓ &lt; d ∗ ℓ -1 +3 , then by Lemma 13 we know that there is a ˜ f ℓ with 640 ∥ ∥ ∥ ˜ f ℓ ∥ ∥ ∥ ≤ C R ϵ ℓ ℓ -1 2˜ r ℓ +1 ℓ and Lip f ( ˜ ) ℓ ≤ C Lip f ℓ ( ∗ ℓ ) ≤ C ρ ℓ ℓ and error 641 - F 1 <!-- formula-not-decoded --> ## Therefore the composition ˜ f L :1 satisfies 642 <!-- formula-not-decoded --> For any L ≥ L ∗ , dimensions d ℓ ≥ d ∗ ℓ and widths w ℓ ≥ N , 643 ˜ f L :1 , by simply adding zero weights along the additional dimensions and widths, and by adding 644 identity layers if L &gt; L ∗ , since it is possible to represent the identity on R d with a shallow network 645 with 2 d neurons and F 1 -norm 2 d (by having two neurons e σ e i ( T i · ) and -e σ i ( -e T i · ) for each basis 646 e i ). Since the cost in parameter norm of representing the identity scales with the dimension, it is 647 best to add those identity layers at the minimal dimension min { d , . . . , d ∗ 0 ∗ L ∗ } . We therefore end up 648 with a AccNet with L -L ∗ identity layers (with F 1 norm 2 min { d , . . . , d ∗ 0 ∗ L ∗ } ) and L ∗ layers that 649 approximate each of the f ∗ with a bounded F -norm function ˜ f . 650 Since f ∗ L :1 has zero population loss, the population loss of the AccNet f L :1 is 651 ρ ∑ L ℓ =1 ρ L ℓ : +1 C L ℓ : +1 c ϵ ℓ ℓ . By McDiarmid's inequality, we know that with probability 1 -δ over the 652 sampling of the training set, the training loss is bounded by ρ ∑ L ρ L ℓ : +1 C L ℓ : +1 c ϵ ℓ ℓ + B √ 2 log 2 / δ 653 (1) The global minimizer f L :1 f L ◦ · · · ◦ f 1 of the regularized loss (with the first regularization 654 we can build an AccNet that fits eactly ℓ 1 ℓ ˜ bounded by ℓ =1 N . ˆ = ˆ ˆ term) is therefore bounded by 655 <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> Taking ϵ ℓ = E ˜ r min ( N ) and λ = N -1 2 log N , this is upper bounded by 656 <!-- formula-not-decoded --> which implies that at the globla minimizer of the regularized loss, the (unregularized) train loss is of 657 order E ˜ r min ( N ) and the complexity measure R f , . . . , f ( ˆ 1 ˆ ) L is of order 1 N E ˜ r min ( N ) which implies 658 that the test error will be of order 659 <!-- formula-not-decoded --> (2) Let us now consider adding the closer to traditional L 2 -regularization L λ ( f L :1 ) = L ( f L :1 ) + 660 λ ∏ L ℓ =1 ∥ f ℓ ∥ F 1 . ,We see that the global minimizer ˆ f L :1 of the L 2 -regularized loss is upper bounded 661 by 662 <!-- formula-not-decoded --> Which for ϵ ℓ = E ˜ r min ( N ) and λ = N -1 N is upper bounded by 663 <!-- formula-not-decoded --> Which implies that both the train error is of order N -1 2 ∏ L ∗ ℓ =1 √ NE ˜ r min ( N ) and the product of the 664 F 1 -norms is of order ∏ L ∗ ℓ =1 √ NE ˜ r min ( N ) . 665 Now note that the product of the F 1 -norms bounds the complexity measure up to a constant since 666 Lip f ( ) ≤ ∥ f ∥ F 1 667 <!-- formula-not-decoded --> And since at the global minimum the product of the F 1 -norms is of order ∏ L ∗ ℓ =1 √ NE ˜ r min ( N ) the 668 test error will of order ( ∏ L ∗ ℓ =1 √ NE ˜ r ℓ ( N ) ) log N √ N . 669 Note that if there is at a most one ℓ where ˜ r ℓ &gt; 1 2 then the rate is up to log term the same as 670 E ˜ r min ( N ) . 671 ## D.2 Lemmas on approximating Sobolev functions 672 - Now we present the lemmas used in this proof above that concern the approximation errors and 673 Lipschitz constants of Sobolev functions and compositions of them. We will bound the F 2 -norm and 674 note that the F 2 -norm is larger than the F 1 -norm, cf. [5, Section 3.1]. 675 Lemma 12 (Approximation for Sobolev function with bounded error and Lipschitz constant) . 676 Suppose g : S d → R is an even function with bounded Sobolev norm ∥ g ∥ 2 W ν, 2 ≤ R with 2 ν ≤ d +2 , 677 with inputs on the unit d -dimensional sphere. Then for every ϵ &gt; 0 , there is ˆ g ∈ G 2 with small 678 approximation error ∥ g - ∥ ˆ g L 2 ( S d ) = C d, ν, R ϵ ( ) , bounded Lipschitzness Lip(ˆ) g ≤ C ′ ( d )Lip( g ) , 679 and bounded norm 680 <!-- formula-not-decoded --> Proof. Given our assumptions on the target function g , we may decompose g x ( ) = ∑ ∞ k =0 g k ( x ) 681 along the basis of spherical harmonics with g 0 ( x ) = ∫ S d g y ( )d τ d ( y ) being the mean of g x ( ) over the 682 uniform distribution τ d over S d . The k -th component can be written as 683 <!-- formula-not-decoded --> with N d,k ( ) = 2 k + d -1 k ( k + d -2 d -1 ) and a Gegenbauer polynomial of degree k and dimension d +1 : 684 <!-- formula-not-decoded --> known as Rodrigues' formula. Given the assumption that the Sobolev norm ∥ g ∥ 2 W ν, 2 is upper 685 bounded, we have ∥ f ∥ 2 L 2 ( S d ) ≤ C 0 ( d, ν ) R for f = ∆ ν/ 2 g where ∆ is the Laplacian on S d [18, 5]. 686 Note that g k are eigenfunctions of the Laplacian with eigenvalues k k ( + d -1) [4], thus 687 <!-- formula-not-decoded --> where the last inequality holds because ∥ f ∥ 2 L 2 ( S d ) = ∑ k ≥ 0 ∥ f k ∥ 2 L 2 ( S d ) converges. Note using the 688 Hecke-Funk formula, we can also write g k as scaled p k for the underlying density p of the F 1 and 689 F 2 -norms: 690 <!-- formula-not-decoded --> where λ k = ω d -1 ω d ∫ 1 -1 σ t P ( ) k ( )(1 t -t 2 ) ( d -2) / 2 d t = Ω( k -( d +3) 2 / ) [5, Appendix D.2] and ω d 691 denotes the surface area of S d . Then by definition of ∥ · ∥ F 2 , for some probability density p , 692 <!-- formula-not-decoded --> Now to approximate g , consider function ˆ g defined by truncating the 'high frequencies' of g , i.e. 693 setting ˆ g k = ✶ [ k ≤ mg ] k for some m&gt; 0 we specify later. Then we can bound the norm with 694 ̸ ̸ <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> where (a) uses Eq 1 and λ k = Ω( k -( d +3) 2 / ) ; (b) approximates by integral. 695 To bound the approximation error, 696 <!-- formula-not-decoded --> ≤ C 5 ( d, ν, R ) m by integral approximation. <!-- formula-not-decoded --> Finally, choosing m = ϵ -1 ν , we obtain ∥ g - ∥ ˆ g L 2 ( S d ) ≤ C d, ν, R ϵ ( ) and 697 <!-- formula-not-decoded --> Then it remains to bound Lip(ˆ) g for our constructed approximation. By construction and by [13, 698 Theorem 2.1.3], we have ˆ = g g ∗ h with now 699 <!-- formula-not-decoded --> by orthogonality of the Gegenbauer polynomial P k 's and the convolution is defined as 700 <!-- formula-not-decoded --> The coefficients for 0 ≤ k ≤ m given by [13, Theorem 2.1.3] are 701 <!-- formula-not-decoded --> where (a) follows from the (inverse of) weighted L 2 norm of P k ; (b) plugs in the unit constant 702 P k (1) = Γ( k + d -1) Γ( d -1) k ! and suppresses the dependence on d . Note that the constant factor Γ( d -1) Γ( d -1+ ) k 703 comes from the difference in the definitions of the Gegenbauer polynomials here and in [13]. Then 704 we can bound 705 <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> by orthogonality of P k 's w.r.t. this measure <!-- formula-not-decoded --> <!-- formula-not-decoded --> for some constant C d ( ) that only depends on d . Hence Lip(ˆ) = g C ′ ( d )Lip( g ) . 706 The next lemma adapts Lemma 12 to inputs on balls instead of spheres following the construction in 707 [5, Proposition 5]. 708 Lemma 13. Suppose f : B (0 , b ) → R has bounded Sobolev norm ∥ f ∥ 2 W ν, 2 ≤ R with ν ≤ ( d +2) 2 / 709 even, where B (0 , b ) = { x ∈ R d : ∥ x ∥ 2 ≤ } b is the radiusb ball. Then for every ϵ &gt; 0 there exists 710 f ϵ ∈ F 2 such that ∥ f -f ϵ ∥ L 2 ( B (0 ,b )) = C d, ν, b, R ( ) ϵ , Lip( f ϵ ) ≤ C ′ ( b, d )Lip( f ) , and 711 <!-- formula-not-decoded --> Proof. Define g z, a ( ) = f ( 2 bz a ) a on ( z, a ) ∈ S d with z ∈ R d and 1 √ 2 ≤ a ∈ R . One may 712 verify that unit-norm ( z, a ) with a ≥ 1 √ 2 is sufficient to cover B (0 , b ) by setting x = bz a and 713 solve for ( z, a ) . Then we have bounded ∥ g ∥ 2 W ν, 2 ≤ b ν R and may apply Lemma 12 to get ˆ g with 714 ∥ g - ∥ ˆ g L 2 ( S d ) ≤ C d, ν, b, R ( ) ϵ. Letting f ϵ ( x ) = ˆ g ( ax b , a ) a -1 for the corresponding ( ax b , a ) ∈ S d 715 gives the desired upper bounds. 716 Lemma 14. Suppose f : B (0 , b ) → R has bounded Sobolev norm ∥ f ∥ 2 W ν, 2 ≤ R with ν ≥ ( d +3) 2 / 717 even. Then f ∈ F 2 and ∥ f ∥ F 2 ≤ C d, ν ( ) b ν R . 718 In particular, W ν, 2 ⊆ F 2 for ν ≥ ( d +3) 2 / even. 719 Proof. This lemma reproduces [5, Proposition 5] to functions with bounded Sobolev L 2 norm instead 720 of L ∞ norm. The proof follows that of Lemma 12 and Lemma 13 and noticing that by Eq 1, 721 <!-- formula-not-decoded --> ̸ <!-- formula-not-decoded --> 722 Finally, we remark that the above lemmas extend straightforward to functions f : B (0 , b ) → R d ′ 723 with multi-dimensional outputs, where the constants then depend on the output dimension d ′ too. 724 ## D.3 Lemma on approximating compositions of Sobolev functions 725 With the lemmas given above and the fact that the F 2 -norm upper bounds the F 1 -norm, we can find 726 infinite-width DNN approximations for compositions of Sobolev functions, which is also pointed out 727 in the proof of Theorem 5. 728 Lemma 15. Assume the target function f : Ω → R d out , with Ω ⊆ B (0 , b ) ⊆ R d in , satisfies: 729 730 731 - · f = g k ◦ · · · ◦ g 1 a composition of k Sobolev functions g i : R d i → R d i +1 with bounded norms ∥ g i ∥ 2 W ν i , 2 ≤ R for i = 1 , . . . , k , with d 1 = d in ; 732 - · f is Lipschitz, i.e. Lip( g i ) &lt; ∞ for i = 1 , . . . , k . If ν i ≤ ( d i +2) 2 / for any i , i.e. less smooth than needed, for depth L ≥ k and any ϵ &gt; 0 , there is an 733 infinite-width DNN ˜ f such that 734 735 - · Lip( ˜ ) f ≤ C 1 ∏ k i =1 Lip( g i ) ; - · ∥ ˜ f -f ∥ ≤ C ϵ - L 2 2 ; 736 the constants C 1 depends on all of the input dimensions d i (to g i ) and d out , and C 2 depends on 737 d , d i out , ν i , b, R, k , and Lip( g i ) for all i . 738 If otherwise ν i ≥ ( d i +3) 2 / for all i , we can have ˜ = f f where each layer has a parameter norm 739 bounded by C R 3 , with C 3 depending on d , d i out , ν i , and b . 740 ## Proof. Note that by Lipschitzness, 741 <!-- formula-not-decoded --> i.e. the pre-image of each component lies in a ball. By Lemma 12, for each g i , if ν i ≤ ( d i +2) 2 / , 742 we have an approximation ˆ g i on a slightly larger ball b ′ i = b ∏ i -1 j =1 C ′′ ( d , d j j +1 )Lip( g j ) such that 743 744 745 - · ∥ g i -ˆ g i ∥ L 2 ≤ C d , d ( i i +1 , ν i , b ′ i , R ϵ ) ; - · ∥ ˆ g i ∥ F 2 ≤ C ′ ( d , d i i +1 , ν i , b ′ i , R ϵ ) d i +3 -2 ν i 2 ν i ; 746 - · Lip(ˆ ) g i ≤ C ′′ ( d , d i i +1 )Lip( g i ) ; - where d i is the input dimension of g i . Write the constants as C i , C ′ i , and C ′′ i for notation simplicity. 747 - Note that the Lipschitzness of the approximations ˆ g i 's guarantees that, when they are composed, 748 - (ˆ g i -1 ◦ · · · ◦ ˆ )(Ω) g 1 lies in a ball of radius b ′ i = b ∏ i -1 j =1 C ′′ j Lip( g j ) , hence the approximation error 749 - remains bounded while propagating. While each ˆ g i is a (infinite-width) layer, for the other L -k 750 layers, we may have identity layers . 5 751 Let ˜ f be the composed DNN of these layers. Then we have 752 <!-- formula-not-decoded --> and approximation error 753 <!-- formula-not-decoded --> where the last equality suppresses the dependence on d , d i out , ν i , b, R, k , and Lip( g i ) for i = 754 1 , . . . , k . 755 In particular, by Lemma 14, if ν i ≥ ( d i +3) 2 / for any i = 1 , . . . , k , we can take ˆ g i = g i . If this 756 holds for all i , then we can have ˜ = f f while each layer has a F 2 -norm bounded by O R ( ) . 757 ## E Technical results 758 Here we show a number of technical results regarding the covering number. 759 First, here is a bound for the covering number of Ellipsoids, which is a simple reformulation of 760 Theorem 2 of [17]: 761 Theorem 16. The d -dimensional ellipsoid E = { x : x T K -1 x ≤ } 1 with radii √ λ i for λ i the i -th 762 eigenvalue of K satisfies log N 2 ( E,ϵ ) = M ϵ (1 + o (1)) for 763 <!-- formula-not-decoded --> if one has log √ λ 1 ϵ = o ( M 2 ϵ k ϵ log d ) for k ϵ = ∣{ ∣ i : √ λ i ≥ ϵ }∣ ∣ 764 For our purpose, we will want to cover a unit ball B = { w : ∥ w ∥ ≤ 1 } w.r.t. to a non-isotropic norm 765 ∥ w ∥ 2 K = w Kw T , but this is equivalent to covering E with an isotropic norm: 766 Corollary 17. The covering number of the ball B = { w : ∥ w ∥ ≤ 1 } w.r.t. the norm ∥ w ∥ 2 K = w Kw T 767 satisfies log N ( B, ∥·∥ K , ϵ ) = M ϵ (1 + o (1)) for the same M ϵ as in Theorem 16 and under the same 768 condition. 769 <!-- formula-not-decoded --> Proof. If ˜ E is an ϵ -covering of E w.r.t. to the L 2 -norm, then ˜ B = K -1 2 ˜ E is an ϵ -covering of B 771 w.r.t. the norm ∥·∥ K , because if w ∈ B , then √ Kw ∈ E and so there is an ˜ x ∈ ˜ E such that 772 ∥ ∥ ∥ x -√ Kw ∥ ∥ ∥ ≤ ϵ , but then ˜ = w √ K -1 x covers w since ∥ ˜ w -w ∥ K = ∥ ∥ ∥ x -√ Kw ∥ ∥ ∥ K ≤ ϵ . 773 5 Since the domain is always bounded here, one can let the bias translate the domain to the first quadrant and let the weight be the identity matrix, cf. the construction in [45, Proposition B.1.3]. Since λ i ≤ Tr K i , we have K ≤ ¯ K for ¯ K the matrix obtained by replacing the i -th eigenvalue λ i of 774 K by Tr K i , and therefore N ( B, ∥·∥ K , ϵ ) ≤ N ( B, ∥·∥ ¯ K , ϵ ) since ∥·∥ K ≤ ∥·∥ ¯ K . We now have the 775 a[proximation log N ( B, ∥·∥ ¯ K , ϵ ) = ¯ M ϵ (1 + o (1)) for 776 <!-- formula-not-decoded --> We now have the simplification 777 <!-- formula-not-decoded --> where the o (1) term vanishes as ϵ ↘ 0 . Furthermore, this allows us to check that as long as 778 log d = o ( √ Tr K 4 ϵ log √ Tr K ϵ ) , the condition is satisfied 779 <!-- formula-not-decoded --> 780 Second we prove how to obtain the covering number of the convex hull of a function set F : 781 K Theorem 18. Let F be a set of B -uniformly bounded functions, then for all ϵ K = B 2 -782 <!-- formula-not-decoded --> Proof. Define ϵ k = B 2 -k and the corresponding ϵ k -coverings ˜ F k (w.r.t. some measure π ). For any 783 f , we write ˜ [ f k f ] for the function ˜ [ f k f ] ∈ F ˜ k that covers f . Then for any functions f in Conv F , we 784 have 785 <!-- formula-not-decoded --> We may assume that ˜ [ f 0 f i ] = 0 since the zero function ϵ 0 -covers the whole F since ϵ 0 = B . 786 We will now use the probabilistic method to show that the sums ∑ m i =1 β i ( ˜ [ f k f i ] -˜ f k -1 [ f i ] ) 787 can be approximated by finite averages. Consider the random functions ˜ g ( k ) 1 , . . . , g ˜ ( k ) m k 788 sampled iid with P [ ˜ g ( k ) j ] = ( ˜ [ f k f i ] -˜ f k -1 [ f i ] ) with probability β i . We have E [˜ g ( k ) j ] = 789 ∑ m i =1 β i ( ˜ [ f k f i ] -˜ f k -1 [ f i ] ) and 790 <!-- formula-not-decoded --> <!-- formula-not-decoded --> Thus if we take m k = 1 a k ( 3 ϵ k ϵ K ) 2 with ∑ a k = 1 we know that there must exist a choice of ˜ g ( k ) j s such 791 that 792 <!-- formula-not-decoded --> This implies that finite the set ˜ = C { ∑ K k =1 1 m k ∑ m k j =1 ˜ g ( k ) j : ˜ g ( k ) j ∈ F ˜ k -F ˜ k -1 } is an 2 ϵ K covering 793 of C = Conv F , since we know that for all f = ∑ m i =1 β f i i there are ˜ g ( k ) j such that 794 <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> Since ∣ ∣ ∣ ˜ C ∣ ∣ ∣ = ∏ K k =1 ∣ ∣ ∣ ˜ F k ∣ ∣ ∣ m k ∣ ∣ ∣ ˜ F k -1 ∣ ∣ ∣ m k , we have 795 <!-- formula-not-decoded --> <!-- formula-not-decoded --> This is minimized for the choice 796 <!-- formula-not-decoded --> which yields the bound 797 <!-- formula-not-decoded --> 798 ## NeurIPS Paper Checklist 799 ## 1. Claims 800 801 802 Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? 803 804 805 Answer: [Yes] Justification: The contribution section accurately describes our contributions, and all theorems are proven in the appendix. ## Guidelines: 806 807 808 - · The answer NA means that the abstract and introduction do not include the claims made in the paper. 809 810 811 812 813 - · The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. - · The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 - · It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. ## 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: We discuss limitations of our Theorems after we state them. ## Guidelines: - · The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. - · The authors are encouraged to create a separate "Limitations" section in their paper. - · The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. - · The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. - · The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. - · The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. - · If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. - · While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. ## 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [Yes] Justification: All assumptions are either stated in the Theorem statements, except for a few recurring assumptions that are stated in the setup section. ## Guidelines: - · The answer NA means that the paper does not include theoretical results. - · All the theorems, formulas, and proofs in the paper should be numbered and crossreferenced. - · All assumptions should be clearly stated or referenced in the statement of any theorems. - · The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. - · Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. - · Theorems and Lemmas that the proof relies upon should be properly referenced. ## 4. Experimental Result Reproducibility | Question: Does the paper fully 865 main experimental results of the 866 | | |--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Answer: [Yes] 868 | | | Justification: The experimental 869 | | | Guidelines: 870 | | | either be a way to access the model (e.g., with the dataset). (d) We recognize that reproducibility | | | 902 | | | to make their results reproducible 876 | | | • Depending on the contribution, 877 | | | For example, if the contribution 878 | | | might suffice, or if the contribution 879 be necessary to either make 880 dataset, or provide access 881 | | | of a large language model), 884 appropriate to the research 885 • While NeurIPS does not 886 submissions to provide some 887 | | | on the nature of the contribution. 888 (a) If the contribution is primarily 889 to reproduce that algorithm. | | | 890 (b) If the contribution is primarily 891 the architecture clearly 892 (c) If the contribution is a 893 894 | | | 895 896 897 898 899 | | | In the case of closed-source some way (e.g., to registered 900 to have some path to reproducing 901 | | | 903 904 | | | including code, unless this 915 | | | 914 | | | 912 913 | | | 911 | | | 908 | | | 909 910 | | | 907 | | | Justification: We use openly build this synthetic data. | Justification: We use openly build this synthetic data. | | available data or synthetic data, with | available data or synthetic data, with | | a description of how | a description of how | | Guidelines: | Guidelines: | | • The answer NA means that paper does not include experiments requiring code. | • The answer NA means that paper does not include experiments requiring code. | | to | to | | • Please see the NeurIPS code and data submission guidelines ( public/guides/CodeSubmissionPolicy ) for more details. | • Please see the NeurIPS code and data submission guidelines ( public/guides/CodeSubmissionPolicy ) for more details. | | • While we encourage the release of code and data, we understand that this might not possible, so 'No' is an acceptable answer. Papers cannot be rejected simply for not | • While we encourage the release of code and data, we understand that this might not possible, so 'No' is an acceptable answer. Papers cannot be rejected simply for not | | 906 | | | 905 | | | supplemental material? Answer: [Yes] | supplemental material? Answer: [Yes] | | be | be | | https://nips.cc/ | https://nips.cc/ | | Question: Does the paper instructions to faithfully | | | provide reproduce | provide reproduce | | open access to the data and code, with the main experimental results, as described | open access to the data and code, with the main experimental results, as described | | in | in | | sufficient | sufficient | | 5. Open access to data and code | 5. Open access to data and code | 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 - · The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. - · The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. - · At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). - · Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. ## 6. Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: In the experimental setup section in the Appendix. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. - · The full details can be provided either with the code, in appendix, or as supplemental material. ## 7. Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? ## Answer: [No] Justification: The numerical experiments are mostly there as a visualization of the theoretical results, our main goal is therefore clarity, which would be hurt by putting error bars everywhere. ## Guidelines: 948 949 - · The answer NA means that the paper does not include experiments. 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 - · The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. - · The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). - · The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) - · The assumptions made should be given (e.g., Normally distributed errors). - · It should be clear whether the error bar is the standard deviation or the standard error of the mean. - · It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. - · For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). - · If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. ## 8. Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? | 973 | Answer: [Yes] | |-----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 974 | Justification: In the experimental setup section of the Appendix. | | 975 | Guidelines: | | 976 | • The answer NA means that the paper does not include experiments. | | 977 | • The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. | | 979 | • The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. | | 981 | • The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). | | 9. Code Of Ethics 984 | Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ? | | 987 | Answer: [Yes] | | 988 | Justification: We have read the Code of Ethics and see no issue. | | 989 | Guidelines: | | | • The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. | | | • If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. | | 993 994 | • The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). | | 10. 995 | Broader Impacts | | 996 997 | Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? | | 998 | Answer: [NA] | | 999 | Justification: The paper is theoretical in nature, so it has no direct societal impact that can be meaningfully discussed. | | 1001 | Guidelines: | | 1002 | • The answer NA means that there is no societal impact of the work performed. • If the authors answer NA or No, they should explain why their work has no societal | | 1003 1004 1005 | impact or why the paper does not address societal impact. • Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. | | 1006 1007 1008 | • The conference expects that many papers will be foundational research and not | | 1009 | tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is to point out that an improvement in the quality of generative models could be used generate deepfakes for disinformation. On the other hand, it is not needed to point | | 1010 1011 1012 | that a generic algorithm for optimizing neural networks could enable people to | | 1013 1014 1015 | legitimate to out train models that generate Deepfakes faster. | | 1016 1017 | • The authors should consider possible harms that could arise when the technology being used as intended and functioning correctly, harms that could arise when technology is being used as intended but gives incorrect results, and harms | | | is the following | | 1018 | from (intentional or unintentional) misuse of the technology. | | | If there are negative societal impacts, the authors could also discuss | | | possible strategies (e.g., gated release of models, providing defenses in addition to mechanisms for monitoring misuse, mechanisms to monitor how a system learns feedback over time, improving the efficiency and accessibility of ML). | | 1023 | | | 1022 | | | • 1020 | | | mitigation attacks, | mitigation attacks, | | 1021 | | | | from | | 1019 | | ## 11. 1024 ## Safeguards | 1025 1026 1027 1031 1032 1033 1034 1035 1036 1037 1038 1039 | Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [NA] Not relevant to our paper. Guidelines: • The answer NA means that the paper poses no such risks. • Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. • Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. • We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best | |---------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 1028 | | | 1029 | Justification: | | 1030 | | | 13. | New Assets Question: Are new assets provided alongside the | | | introduced assets? | | | in the paper well documented and is the Answer: [NA] | | | • The answer NA means that the paper does not release new assets. • Researchers should communicate the details of the dataset/code/model as part of submissions via structured templates. This includes details about training, | | | their license, limitations, etc. • The paper should discuss whether and how consent was obtained from people | | | asset is used. • At submission time, remember to anonymize your assets (if applicable). You can create an anonymized URL or include an anonymized zip file. | | 1076 | the derived asset (if it has changed) should be provided. • If this information is not available online, the authors are encouraged to reach out to the asset's creators. documentation We do not release any new assets. | | 1058 1059 1060 1061 | license of a dataset. • For existing datasets that are re-packaged, both the original license and the license of | | 1072 1073 1074 1075 | | | 1066 | | | 1062 1063 1064 1065 | | | 1067 1068 | | | 1069 1070 | Justification: Guidelines: | | 1071 | | | | either | | | whose | | 1077 | 14. Crowdsourcing and Research with Human Subjects | | | | | | |-----------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------|---------------|-----|----------------|------| | 1078 1079 | Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? | | | | 1080 | | | 1081 | Answer: [NA] | | | | | | | 1082 | Justification: Not relevant to this paper. | | | | | | | | Guidelines: | | | | 1083 | | | that the paper does not involve crowdsourcing nor research with 1084 1085 information in the supplemental material is fine, the main 1086 1087 1088 | but if contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. • According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. | answer NA means subjects. Including this | • The human • | | 1089 1090 1091 | | | 1093 | Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects | | | 15. | 1092 | | | 1094 1095 | Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) | | | | | | | 1096 1097 | approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? | | | | | | | 1098 | Answer: [NA] | | | | | | | 1099 | Justification: Not relevant to this paper. Guidelines: | | | | | | | 1100 | • The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. | | | | | | | 1101 1102 | • Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you | | | | 1103 1104 | | | 1105 | should clearly state this in the paper. • We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and | | | | | | | | guidelines for their institution. | | | | | | | | • For initial submissions, do not include any information that would break anonymity applicable), such as the institution conducting the review. | | | | | | | | (if | | | | | | | 1110 | | | | | | | | 1107 | | | | | | | | 1108 | | | | | | | | | the | | | | | | | | 1109 | | | | | | | | | | | | | 1106 |
ztwl4ubnXV
OxonFair: A Flexible Toolkit for Algorithmic Fairness
We present OxonFair, a new open source toolkit for enforcing fairness in binary classification. Compared to existing toolkits: (i) We support NLP and Computer Vision classification as well as standard tabular problems. (ii) We support enforcing fairness on validation data, making us robust to a wide range of overfitting challenges. (iii) Our approach can optimize any measure based on True Positives, False Positive, False Negatives, and True Negatives. This makes it easily extensible and much more expressive than existing toolkits. It supports all 9 and all 10 of the decision-based group metrics of two popular review articles. (iv) We jointly optimize a performance objective alongside fairness constraints. This minimizes degradation while enforcing fairness, and even improves the performance of inadequately tuned unfair baselines. OxonFair is compatible with standard ML toolkits, including sklearn, Autogluon, and PyTorch and is available at https://github.com/oxfordinternetinstitute/oxonfair.
https://openreview.net/pdf/1198c251b0f5664b73f1ec30b356982f81f81fc7.pdf
[ { "confidence": 3, "rating": 7, "review_id": "eHIhFf9cWw", "review_text": "The paper introduces \"AnonFair,\" a toolkit designed to enforce algorithmic fairness across various domains, including NLP, computer vision, and traditional tabular data. It is compatible with popular machine learning frameworks like sklearn, AutoGluon, and PyTorch. Unlike well-established fairness tools like FairLearn and AIF360, AnonFair extends to different types of data, including NLP and vision.\n\nOther tools offer many methods but limited control over them, while AnonFair uses a single, highly customizable method that allows for per-group thresholding.\n\nIt specifically addresses the issue of overfitting by utilizing validation data, making it more reliable when traditional methods might fail.\n\nEmpirical evidence presented shows that AnonFair performs well, often matching or surpassing other methods in fairness benchmarks without being specifically optimized for complex or high-dimensional scenarios.\n\nAnonFair seems to provide a robust and adaptable solution for implementing fairness in machine learning, in ways that other tools do not currently offer.\n\n- The paper does well in positioning AnonFair against competing tools by demonstrating its performance on standard fairness metrics and its versatility across a variety of use cases.\n- AnonFair supports NLP and computer vision classification tasks, allowing broader applicability.\n- The toolkit uses validation data to combat overfitting, ensuring that fairness measures remain robust across both training and unseen data.\n\n- The toolkit not only competes well in terms of accuracy and fairness metrics but also offers significant advantages in computational efficiency.\n\n- Some sections are overly detailed, such as the introduction, while others are missing necessary depth:\n - Section 3 could use a clearer structure, possibly with a diagram, to help readers understand how to interact with the toolkit.\n - The section on toolkit expressiveness needs more detailed examples and explanations of how the supported fairness measures are implemented. \n - Results discussion is kept very brief and could benefit from specific numerical examples, like percentage improvements compared to other methods.m actual numbers, such as how much % improvement in comparison to method XY and such.\n\n- The paper assumes readers are familiar with fairness terminology and metrics without adequate explanations or definitions for some acronyms (e.g., DEO in Table 3 and 4).\n - Subsection 4.3 lists supported fairness measures but fails to provide examples or brief explanations, making it less informative for those not familiar with these terms.\n\n- Lack of consistency in terminology usage; for example, \"EOp\" in Figure 1 (top right) vs. \"EO\" in Section 5.2, “AnonFair” missing before \"Frontier\" in Figure 1 (left), and inconsistent references like \"See Figure\" vs. \"See fig..\"\n\n- A stronger call to action for community engagement, such as through open-source collaboration or empirical validation studies, could significantly enhance the broader impact and encourage more widespread adoption and refinement of AnonFair.\n\n- The paper would benefit from a summary of explicit cases and recommendations advising users on the best scenarios for using the tool.\n\n- Figure 2 is not referred to in the paper, or did I miss this part.\n\n1. The paper mentions that hard assignment is more efficient than soft assignment, while appendix A adds some operational details, it remains unclear how these methods specifically compare in terms of quantitative metrics. Could the authors provide specific metrics or comparisons that demonstrate the efficiency and performance benefits of hard assignment?\n2. The discussed limitations reads a bit out of context given provided evidence in the paper. What makes the mentioned solutions suboptimal, and how significant are these shortcomings? Also it was not clear to me, after finishing reading, when it is adequate to use this tool and what could be use cases when it fails. Including this into the conclusion could make the reader grasping the full picture. \n3. Is Figure 6 part of the Appendix or misplaced?" }, { "confidence": 1, "rating": 7, "review_id": "suISL3caiH", "review_text": "This paper describes a new toolkit for algorithmic fairness, enabling the optimization of any fairness measure that is a function of the confusion matrix. Experiments on vision and NLP demonstrated the effectiveness of the proposed toolkit.\n\nAn easy-to-use toolkit for enforcing algorithmic fairness.\n\nPresentation could be made more self-contained, e.g. a table listing the supported fairness metrics, as functions of the confusion matrix. This would help readers not familiar with the field.\n\nIt seems that only binary classification is supported. How can such metrics be extended to other tasks?\n\nSome minimal code snippets for the interface could be shown as examples.\n\n- L5: \"True positives, false positives, ...\" => \"the confusion matrix\"\n - L6: \"extendable\" => \"extensible\"" }, { "confidence": 4, "rating": 6, "review_id": "wfWExSdRPF", "review_text": "The paper introduces a new toolkit designed to enhance algorithmic fairness with greater expressiveness. Unlike existing toolkits, this one offers more customization options to optimize user-defined objectives and fairness constraints. Although the proposed toolkit currently includes only one method, it supports both computer vision and natural language processing (NLP) tasks. The authors compare the efficiency of this method, finding that the toolkit is relatively more efficient than Fairlearn. Comprehensive experiments were conducted on various datasets, and the results were compared with those from other popular toolkits.\n\n- The paper introduces a versatile toolkit that supports both NLP and computer vision tasks, unlike existing toolkits which lack this capability.\n- The proposed toolkit employs efficient optimization techniques that accelerate the evaluation process.\n\n- The formulation presented in Subsection 4.2 of the paper is limited to a single-layer model, which restricts its applicability across different machine learning models. To enhance the flexibility of the method, I recommend adopting a more generic notation, particularly if we aim to incorporate pretrained language models.\n- The abstract is quite unclear, especially the part that mentions \"9/9 and 10/10 of the group metrics of two popular review papers.\" I suggest rephrasing the abstract for better clarity and comprehension.\n\n- In Figure 3, the proposed toolkit appears to encounter scaling issues when reaching 5 groups. Could you provide more details on why this occurs and elaborate on the underlying reasons for this limitation?\n- The paper presents results on multilingual datasets. Do you have any specific findings for each language, particularly regarding the effectiveness of the toolkit for individual languages?" }, { "confidence": 4, "rating": 4, "review_id": "Fzy3CesDFd", "review_text": "The paper describes details of a fairness toolkit (\"AnonFair\"), which confers fairness to any given machine learning classifier by exploring a wide range of prediction thresholds for different groups (which are either provided upfront or inferred through an auxiliary classifier). The toolkit is designed to be quite expressive, as it can optimize several different metrics, e.g., false positives/negatives, true positives, etc. The toolkit can work across all classifiers (which can output class probabilities), including ones trained on vision and NLP tasks.\n\nThe paper introduces and describes a toolkit that implements several fairness strategies and can support any fairness measure that can be expressed in terms of true positives, false positives, true negatives and false negatives. These techniques primarily rest upon adjusting the classification thresholds of different groups, and the paper also incorporates tricks to speed up their computations of precision and recall across different thresholds. The fairness techniques that this paper implements are (largely) classifier agnostic, and can be applied to a wide range of classifiers including NLP and vision classifiers (as this paper shows). Overall, I appreciate that expressivity and broad applicability of their toolkit.\n\nWhile the toolkit might turn out to be useful for some practitioners, it is a relatively straightforward implementation of well-known (and simple) technique of adjusting prediction thresholds across groups. Exploring different thresholds can be computationally prohibitive, for which the authors use a standard trick to speed up their explorations (which I appreciate). The paper acknowledges and cites relevant papers/techniques that they implement. Overall, the originality and novelty of their work is significantly limited, as the toolkit is an implementation of known and simple fairness techniques. Further, the underlying fairness techniques (not from the authors) are themselves applicable to most classifiers, so any implementation of the same could work for NLP and vision tasks—which is claimed to be one of the major contributions of this work.\n\nI feel that the current version is a good starting point (in terms of implementation) of existing fairness techniques and speeding them up and trying them out on vision and NLP tasks. To improve the paper, I would suggest clearly outlining the important problems that this toolkit now can enable researchers to answer (which was not possible before) and answer a few of those questions in the paper." }, { "confidence": 3, "rating": 6, "review_id": "r0Qce3R7mX", "review_text": "This paper presents AnonFair, a cutting-edge open-source toolkit designed to promote algorithmic fairness. Authors claim the following contributions:\n(1) Comprehensive support for NLP and Computer Vision classification, as well as standard tabular problems.\n(2) Enhanced robustness against overfitting challenges through the ability to enforce fairness on validation data.\n(3) Versatility in optimizing any measure that is a function of True Positives, False Positives, False Negatives, and True Negatives, making it easily adaptable and more expressive than other toolkits.\n(4) Seamless integration with popular ML toolkits such as sklearn, Autogluon, and pytorch.\n(5) AnonFair supports 9/9 and 10/10 of the group metrics of two prominent review papers and is accessible online at no cost.\n\nThis toolkit progresses in algorithmic fairness and enhances multidisciplinary collaborations, it is design to integrate the intervention of policy-makers.\n\nThe paper includes a complete section of experiments and comparison with existing toolkits. \n\nAnonFair key contributions include support to popular and relevant NLP and Computer vision areas.\n\n* Lack of clarity in some reported experiments, e.g. results tables are not cited in the text, metrics are not well-contextualized (e.g. larger or lower scores are better?)\n\n* Lack of analysis, examples or human evaluation to better understand contributions and limitations of the method in each of the experiments.\n\n(1) Could you provide more high-level context for each of the experiments that you are running in order to make the paper more self-contained?\n(2) for NLP experiments, why do you think mitigation works for Twitter and not for Jigsaw?" } ]
## OxonFair: A Flexible Toolkit for Algorithmic Fairness Eoin Delaney Zihao Fu University of Oxford University of Oxford Sandra Wachter University of Oxford Brent Mittelstadt University of Oxford [email protected] Chris Russell University of Oxford ## Abstract We present OxonFair, a new open source toolkit for enforcing fairness in binary classification. Compared to existing toolkits: (i) We support NLP and Computer Vision classification as well as standard tabular problems. (ii) We support enforcing fairness on validation data, making us robust to a wide range of overfitting challenges. (iii) Our approach can optimize any measure based on True Positives, False Positive, False Negatives, and True Negatives. This makes it easily extensible and much more expressive than existing toolkits. It supports all 9 and all 10 of the decision-based group metrics of two popular review articles. (iv) We jointly optimize a performance objective alongside fairness constraints. This minimizes degradation while enforcing fairness, and even improves the performance of inadequately tuned unfair baselines. OxonFair is compatible with standard ML toolkits, including sklearn, Autogluon, and PyTorch and is available at https://github.com/oxfordinternetinstitute/oxonfair . ## 1 Introduction The deployment of machine learning systems that make decisions about people offers an opportunity to create systems that work for everyone. However, such systems can lock in existing prejudices. Limited data for underrepresented groups can result in ML systems that do not work for them, while the use of training labels based on historical data can result in ML systems copying previous biases. As such, it is unsurprising that AI systems have repeatedly exhibited unwanted biases towards certain demographic groups in a wide range of domains including medicine [1, 2], finance [3, 4], and policing [5]. Such groups are typically identified with respect to legally protected attributes, such as ethnicity or gender [6, 7, 3]. The field of algorithmic fairness has sprung up in response to these biases. Contributions to algorithmic fairness can broadly be split into methodological and policy-based approaches. While much methodological work focuses on measuring and enforcing (un)fairness, a common criticism from the policy side is that this work can occur 'in isolation from policy and civil societal contexts and lacks serious engagement with philosophical, political, legal and economic theories of equality and distributive justice' [8]. In response to these criticisms, we have developed OxonFair, a more expressive toolkit for algorithmic fairness. We acknowledge that people designing algorithms are not always the right people to decide on policy, and as such we have chosen to create as flexible a toolkit as possible to allow policymakers and data scientists with domain knowledge to identify relevant harms and directly alter the system behaviour to address them. Unlike existing Fairness toolkits such as AIF360 [9], which take a method-driven approach, and provide access to a wide range of methods but with limited control over their behaviour, we take a measure-based approach and provide one fairness method that is extremely customizable, and can optimize user-provided objectives and group fairness constraints. Figure 1: Left: The need for an objective when enforcing fairness. We evaluate a range of methods with respect to balanced accuracy and demographic parity (OxonFair generates a frontier of solutions). Only OxonFair and RejectOptimization optimize balanced accuracy. As we improve the balanced accuracy of fair methods by adjusting classification thresholds (gray lines) fairness deteriorates. To avoid this, we jointly optimize a fairness measure and an objective. For more examples, see Figure 6. Right Top: Using validation data in fairness. We compare against Fairlearn using standard algorithms with default parameters. These methods perfectly overfit and show no unfairness with respect to equal opportunity on the trainset, but substantial unfairness on test. OxonFair enforces fairness on held-out validation data and is less prone to overfitting. Right Bottom: A comparison of toolkits. AIF360 offers a large range of tabular methods, most of which do not allow fairness metric selection, Fairlearn offers fewer but more customizable tabular methods. OxonFair offers one method that can be applied to text, image, and tabular data, while supporting more notions of fairness and objectives. <!-- image --> | Classifier (dataset) | Partition | Fairlearn ) | Fairlearn ) | OxonFair | OxonFair | |---------------------------------|----------------|---------------|---------------|------------|------------| | | | Acc ( ↑ | ) DEO ( ↓ | Acc ( ↑ ) | DEO ( ↓ ) | | Decision Tree (adult) | Train/Val Test | 100% 81% | 0% 8.8% | 82% 81% | 2.0% 1.1% | | Random Forest (adult) | Train/Val Test | 100% 86% | 0% 7.5% | 86% 86% | 1.6% 3.3% | | XGBoost (myocardial infarction) | Train/Val Test | 100% 89% | 0% 11.8% | 90% 87% | 0.6% 2.9% | | Criterion | AIF360 | Fairlearn | OxonFair | |------------------------------------------|----------|-------------|------------| | Number of methods | 10+ | 5 | 1 | | Adjustible Fairness Criteria | × | ✓ | ✓ | | Supports 3+ Groups | × | ✓ | ✓ | | Fairness definitions enforced per method | < 4 | 5 | 14+ | | Methods needing groups at eval | Some | 1 | No | | Supports Utility Functions | × | × | ✓ | | Supports Tabular Data | ✓ | ✓ | ✓ | | Supports Computer Vision | × | × | ✓ | | Supports NLP | × | × | ✓ | To do this, we focus on one of the oldest and simplest approaches to group fairness: per-group thresholding [10, 11, 3], which is known to be optimal for certain metrics under a range of assumptions [3, 12, 13]. Our contribution is to make this as expressive as possible while retaining speed, for the relatively low number of groups common in algorithmic fairness. Inherently, any approach that allows a sufficiently wide set of objectives, and sets per-group thresholds will be exponential with respect to the number of groups, but we use a standard trick, widely used in the computation of measures such as mean absolute precision (mAP) to make this search as efficient as possible. Accepting the exponential complexity allows us to solve a much wider range of objectives than other toolkits, including maximizing F1 or balanced accuracy (see Figure 1 left), minimizing difference in precision [14], and guaranteeing that the recall is above k% for every group [8]. Where groups are unavailable at test time, we simply use a secondary classifier to estimate group memberships [15, 16] and set different thresholds per inferred group to enforce fairness with respect to the true groups. Thresholding can be applied to most pretrained ML algorithms, and optimal thresholds can be selected using held-out validation data unused in training. This is vital for tasks involving deep networks such as NLP and computer vision, where the training error often goes to zero, and fairness methods that balance error rates between groups cannot generalize from constraints enforced on overfitting training data to previously unseen test data [17]. While overfitting is unavoidable in vision and NLP tasks, it is still a concern on tabular data. Figure 1 Top-Right, shows examples of decision trees, random forests [18] and XGBoost [19] trained with default parameters and obtaining 0 training error on standard datasets. This causes the Fairlearn reductions method [20] to fail to enforce fairness. NLP and vision are so challenging that two popular toolkits Fairlearn and AIF360 do not attempt to work in these domains. In contrast, we target them, making use of a recent work [21] that showed how fair classifiers based on inferred group thresholds can be compressed into a single network. ## 2 Related Work Bias mitigation strategies for classification have been broadly categorized into three categories [6, 22-24]; pre-processing, in-processing and post-processing. Pre-processing algorithms improve fairness by altering the dataset in an attempt to remove biases such as disparate impact [11] before training a model. Popular preprocessing approaches include simply reweighting samples in the training data to enhance fairness [25], optimizing this process by learning probabilistic transformations [26], or by generating synthetic data [27-29]. In-processing / In-training methods mitigate bias by adjusting the training procedure. Augmenting the loss with fair regularizers [23, 30] is common for logistic regression and neural networks. Agarwal et al. [31] iteratively alter the cost for different datapoints to enforce fairness on the train set. Approaches based on adversarial training typically learn an embedding that reduces an adversary's ability to recover protected groups whilst maximizing predictive performance [3235]. Other popular approaches include Disentanglement [36, 37], Domain Generalization [38-40], Domain-Independence [41] and simple approaches such as up-sampling or reweighing minority groups during training. Notably, in the case of high-capacity models in medical computer-vision tasks, a recent benchmark paper by Zong et al. [42] showed state-of-the-art in-processing methods do not significantly improve outcomes over training without consideration of fairness. A comprehensive benchmark study of in-processing methods in other domains is provided by Han et al. [43]. Post-processing methods enforce fairness by using thresholds and randomization to adjust the predictions of a trained model based on the protected attributes [3, 44]. Post-processing methods are typically model-agnostic and can be applied to any model that returns confidence scores. Enforcing Fairness on Validation Data avoids the misestimation of error rates due to overfitting. It has shown particular promise in computer vision through Neural Architecture Search [45], adjusting decision boundaries [30], reweighting [46] and data augmentation [17]. ## 2.1 Fairness Toolkits Most toolkits such as Fairness Measures [47], TensorFlow Fairness Indicators [48], and FAT Forensics [49] focus on measuring bias and do not support enforcing fairness through bias mitigation. FairML [50] audits fairness by quantifying the importance of different attributes in prediction. This is best suited for tabular data where features are well-defined. FairTest [51] investigates the associations between application outcomes (e.g., insurance premiums) and sensitive attributes such as age to highlight and debug bias in deployed systems. Aequitas [52] provides examples of when different measures are (in)appropriate with support for some bias mitigation methods in binary classification. Themis-ML [53] supports the deployment of several simple bias mitigation methods such as relabelling [25], but focuses on linear models. Friedler et al. [22] introduce the more complete Fairness Comparison toolkit where four bias mitigation strategies are compared across five tabular datasets and multiple models (Decision trees, Gaussian Naïve Bayes, SVM, and Logistic Regression). There are two fairness toolkits that support sklearn [18] like OxonFair. These are the two most popular toolkits: Microsoft Fairlearn [20] (1.9k GitHub Stargazers as of June 2024) and IBM AIF360 [9] (2.4k Stargazers). AIF360 offers a diverse selection of bias measures and pre-processing, inprocessing and post-processing bias mitigation strategies on binary classification tabular datasets. For mitigation, Fairlearn primarily offers implementations of [31, 3] avoiding the use of the term bias , instead considering fairness through the lens of fairness-related harms [54] where the goal is to 'help practitioners assess fairness-related harms, review the impacts of different mitigation strategies and make trade-offs appropriate to their scenario' . Lee &amp; Singh [55] recognized Fairlearn as one of the most user-friendly fairness toolkits, and critiqued AIF360 as being the least user-friendly toolkit. Both AIF360 and Fairlearn contain post-processing methods that select per-group thresholds. Unlike OxonFair, neither method uses the fast optimization we propose; both methods require group information at test time; AIF360 only supports two groups, but does use cross-validation to avoid overfitting; Fairlearn does not support the use of validation data, but does support more than two groups. According to their documentation, neither toolkit can be applied to NLP or computer vision. Specialist solvers Fairret [56] is a recent PyTorch library shown to enforce fairness on tabular data. As PyTorch is a focus of OxonFair (see Section 4.2), we compare with Fairret in Appendix D.1. Cruz and Hardt [57] proposed an efficient LP-based formulation for post-processing Equalized Odds. It supports randomized thresholds, and dominates fairness methods such as OxonFair that use only one threshold per group. In Appendix D.2 we show how OxonFair can be extended to support randomized Figure 2: Left: Summary of the fast path algorithm for inferred attributes (Section 4.1). Groups are noisily estimated using a classifier. Within each estimated group, we cumulatively sum positive and negative samples that truly belong to each group. For each pair of thresholds, we select relevant sums from the inferred group and combine them. See Appendix A.1. Center: Combining two heads (original classifier and group predictor) to create a fair classifier. See Section 4.2. Right: The output of a second head predicting the protected attribute in CelebA. The pronounced bimodal distribution makes the weighted sum of the two heads a close replacement for per-group thresholds. <!-- image --> thresholds, alongside a determenistic variant and another using inferred group membership that are not supported by [57]. ## 3 Toolkit interface The interface of OxonFair decomposes into three parts: (i) evaluation of fairness and performance for generic classifier outputs. (ii) evaluating and enforcing fairness for particular classifiers. (iii) specialist code for evaluating and enforcing fairness for deep networks. Code for the evaluation of classifier outputs takes target labels, classifier outputs, groups, and an optional conditioning factor as input; while code for the evaluation and enforcement of fairness of a particular classifier, are initialized using the classifier, and from then on take datasets (in the form of a pandas dataframe [58], or a dictionary) as input, and automatically extracts these factors from them. The evaluation code provide three functions: evaluate which reports overall performance of the classifier; evaluate\_per\_group which reports performance per group of the classifier; and evaluate\_fairness which reports standard fairness metrics. All methods allow the user to specify which metrics should be reported. We recommend data scientists focus on evaluate\_per\_group which shows direct harms such as poor accuracy, precision, or low selection rate for particular groups. OxonFair provides an interface FairPredictor(classifier, validation\_data, groups) that takes an existing classifier as an input, a validation dataset, and specification of the groups as an input and returns an object which we then enforce fairness on by calling .fit(objective, constraint, value) . Internally, the method explores a wide range of possible thresholds for each group, membership of which is assumed to be either known or inferred by an auxiliary classifier. The resulting FairPredictor has evaluation methods as described above. When called without arguments, they report both the performance of the original and the updated fair classifier on the validation data. In addition, FairPredictor provides methods predict and predict\_proba which make fair predictions and return scores corresponding to the left-hand side of Equation (1). Calling fit optimizes the objective - typically a relevant performance criteria such as accuracy, subject to the requirement that the constraint is either greater or less than the value. If the objective should be minimized or maximized is inferred automatically, as is the requirement that the constraint is less than or greater than the value, but this default behavior can be user overridden. This is a relatively minimal interface, but one that is surprisingly expressive. By explicitly optimizing an objective, we can not just minimize the degradation of the metric as we enforce fairness, but sometimes also improve performance over the unfair baseline that is not fully optimized with respect to this metric. Even when optimizing for accuracy, this can create situations where it looks like some improvements in fairness can be had for free, although generally this is an artifact of the gap between optimizing log-loss and true accuracy in training. By formulating the problem as a generic constrained optimization, and not requiring the constraint to be a typical fairness constraint, we leave it open for enforcing a much broader space of possible objectives. This can be seen in Appendix C, where we show how to enforce minimax fairness [59], maximize utility [60] combined with global recall constraints, and demonstrate levelling-up [8] by specifying minimum acceptable harm thresholds. Under the hood, a call to fit generates a Pareto frontier 1 and selects the solution that best optimizes the objective while satisfying the constraint. The frontier can be visualized with plot\_frontier . ## 4 Inference To make decisions, we assign thresholds to groups. We write f ( x ) for the response of a classifier f , on datapoint x t , for the vector corresponding to the ordered set of thresholds, and G x ( ) for the one-hot encoding of group membership. We make a positive decision if <!-- formula-not-decoded --> To optimize arbitrary measures we perform a grid search over the choices of threshold, t . Efficient grid sampling We make use of a common trick for efficiently computing measures such as precision and recall for a range of thresholds. This trick is widely used without discussion for efficient computation of the area under ROC curves, and we have had trouble tracking down an original reference for it. As one example, it is used by scikit-learn [18]. The trick is as follows: sort the datapoints by classifier response, then generate a cumulative sum of the number of positive datapoints and the number of negatives, going from greatest response to least. When picking a threshold between points i and i +1 , TP is given by the cumulative sum of positives in the decreasing direction up to and including i ; FN is the sum of negatives in the same direction; FP is the total sum of positives minus TP, and TN is the total sum of negatives minus TN. We perform this trick per group, and efficiently extract the TP, FN, FP and TN for different thresholds. These are combinatorially combined across the groups and the measures computed. This two stage decoupling offers a substantial speed-up. If we write T for the number of thresholds, k for the number of groups, and n for the total number of datapoints, our procedure is upper-bounded by O T ( k + n log n ) , while the naïve approach is O nT ( k ) . No other fairness method makes use of this, and in particular, all the threshold-based methods offered by AIF360 make use of a naïve grid search. From the grid sampling, we extract a Pareto frontier with respect to the two measures 2 . The thresholds that best optimize the objective while satisfying the constraint are returned as a solution. If no such threshold exists, we return the thresholds closest to satisfying the constraint. ## 4.1 Inferred characteristics When using inferred characteristics, we offer two pathways for handling estimated group membership. The first pathway we consider makes a hard assignment of individuals to groups, based on a classifier response. The second pathway explicitly uses the classifier confidence as part of a per-datapoint threshold. In practice, we find little difference between the two approaches, but the hard assignment to groups is substantially more efficient and therefore allows for a finer grid search and generally better performance. However, the soft assignment remains useful for the integration of our method with neural networks, where we explicitly merge two heads (a classifier and a group predictor) of a neural network to arrive at a single fair model. For details of the two pathways see Appendix A. ## 4.2 Fairness for Deep Networks We use the method proposed in [21] (N.B., they used it only for demographic parity). Consider a network with two heads f , and g , comprised of single linear layers, and trained to optimize two tasks on a common backbone B . Let f be a standard classifier trained to maximize some notion of performance such as log-loss and g is a classifier trained to minimize the squared loss 3 with respect to 1 A maximal set of solutions such that for every element in the set, any solution with a better score with respect to the objective would have a worse score with respect to the constraint, and vice versa. 2 In practice this is done twice, once in a coarse grid search to determine a good range, and then in a second finer search within the minimum and maximum range found by the first search 3 The squared loss is used rather than log-loss so that the output of g x ( ) remains close to 0 and 1. With log-loss, the output pre-sigmoid is more likely to overwhelm confident decisions made by the original classifier. Figure 3: Left: Results on Compas without using group annotations at test time. Right: Runtime Comparison for Fairlearn Reductions and OxonFair on Adult using a Macbook M2. To alter the groups, we iteratively merge the smallest racial group with 'Other', reducing the search space. For both methods, we enforced demographic parity over a train set consisting of 70% of the data. Despite the exponential complexity of our approach, we remain significantly faster until we reach 5 groups. The 0.6+ indicates the seconds to train XGBoost. OxonFair(S) indicates the runtime of the naive slow pathway described in Appendix A.2 rather than our accelerated approach. <!-- image --> | #Groups | Accuracy ( ↑ ) | Dem. Par. ( ↓ ) | Time ( ↓ ) | |---------------|------------------|-------------------|--------------| | FairLearn 5 | 86.5% | 3.8% | 47.0s | | OxonFair 5 | 86.9% | 1.9% | 0.6+ 42.9s | | OxonFair(S) 5 | - | - | - | | FairLearn 4 | 86.7% | 1.6% | 28.4s | | OxonFair 4 | 86.8% | 1.2% | 0.6+0.79s | | OxonFair(S) 4 | - | - | 0.6+411s | | FairLearn 3 | 86.5% | 0.7% | 25.0s | | OxonFair 3 | 86.8% | 2.1% | 0.6+ 0.07s | | OxonFair(S) 3 | - | - | 0.6+ 22.4s | | FairLearn 2 | 86.9% | 0.2% | 20.0s | | OxonFair 2 | 86.9% | 0.3% | 0.6+0.04s | | OxonFair(S) 2 | - | - | 0.6+1.2s | a vector that is a one hot-encoding of group membership. Any decision f ( x ) -t · g x ( ) ≥ 0 can now be optimized for given criteria by tuning weights w using the process outlined in the slow pathway. As both f and g are linear layers on top of a common nonlinear backbone B , we can write them as: <!-- formula-not-decoded --> note that as f ( x ) is a real number, and g x ( ) is a vector, w f is a vector and b f a real number, while w g is a matrix and b g a vector. This means that the decision function f ( x ) -t · g x ( ) ≥ 0 can be rewritten using the identity: <!-- formula-not-decoded --> <!-- formula-not-decoded --> This gives a 3 stage process for enforcing any of these decision/fairness criteria for deep networks. - 1. Train a multitask neural network as described above. - 2. Compute the optimal thresholds t on held-out validation data as described in Appendix A. - 3. Replace the multitask head with a neuron with weights ( w f -t · w g ) and bias ( b f -t · b g ) . To maximize performance, the training set should be augmented following best practices, while, to ensure fairness, the validation set should not . 4 The resulting network f ∗ will have the same architecture as the original non-multitask network, while satisfying chosen criteria. OxonFair has a distinct interface for deep learning . 5 Training and evaluating NLP and vision frequently involves complex pipelines. To maximize applicability, we assume that the user has trained a two-headed network as described above, and evaluated on a validation set. Our constructor DeepFairPredictor takes as an input: the output of the two-headed network over the validation set; the ground-truth labels; and the groups. fit and the evaluation functionality can then be called in the same way. Once a solution is selected, the method merge\_heads\_pytorch generates the merged head, while extract\_coefficients can be called to extract the thresholds t from 4, when working with a different framework. ## 4.3 Toolkit expressiveness Out of the box, OxonFair supports all 9 of the decision-based group fairness measures defined by Verma and Rubin [61] and all 10 of the fairness measures from Sagemaker Clarify [62]. OxonFair 4 In some situations, a credible case can be made for including the mildest forms of augmentation, such as left-right flipping in the validation set, to maximize the validation set size. 5 We provide example notebooks for practitioners to get started with DeepFairPredictor in our toolkit. Table 1: We report mean scores over the 14 gender independent CelebA labels [28]. Single task methods and FairMixup scores in the second and third blocks are from Zietlow et al. [17]. ERM is the baseline architecture run without fairness. OxonFair (optimizing for accuracy and difference in equal opportunity (DEO)), has better accuracy ( ↑ ) and DEO ( ↓ ) scores than any other fair method. | | ERM | Uniconf. | Domain | Domain | OxonFair | ERM | Debiasing | Regularized 77] | g-SMOTE [17] | g-SMOTE | ERM | FairMixup | |------|-----------|------------|----------------|-----------|------------|-------------|-------------|-------------------|----------------|-----------|---------|-------------| | | multitask | Adv.[74] | Disc. [75, 41] | Ind. [41] | DEO | single task | GAN [28] | [76, | Adaptive | [17] | [78] | [78] | | Acc. | 93 . 07 | 92 . 71 | 92 . 96 | 92 . 63 | 92 . 75 | 92 . 47 | 92 . 12 | 91 . 05 | 92 . 56 | 92 . 64 | 92 . 74 | 88 . 46 | | DEO | 16 . 47 | 19 . 63 | 14 . 61 | 7 . 78 | 3 . 21 | 12 . 54 | 9 . 11 | 3 . 77 | 14 . 28 | 15 . 11 | 7 . 97 | 3 . 58 | supports any fairness measure (including conditional fairness measures) that can be expressed per group as a weighted sum of True Positives, False Positives, True Negatives and False Negatives. OxonFair does not support notions of individual fairness such as fairness through awareness [63] or counterfactual fairness [64, 65]. See Appendix B for a discussion of how metrics are implemented and comparison with two review papers. Appendix C contains details of non-standard fairness metrics, including utility optimization [60]; minimax fairness [59, 66, 67]; minimum rate constraints [8], and Conditional Demographic Parity [68]. This also includes a variant of Bias Amplification [69, 70]. ## 5 Experimental Analysis For tabular data, we compare with all group fairness methods offered by AIF360, and the reductions approach of Fairlearn. OxonFair is compatible with any learner with an implementation of the method predict\_proba consistent with scikit-learn [18] including AutoGluon [71] and XGBoost [19]. A comparison with Fairlearn and the group methods from AIF360 on the adult dataset can be seen in Figures 1 and 6 using random forests. This follows the setup of [9]: we enforce fairness with respect to race and binarize the attribute to white vs everyone else (this is required to compare with AIF360), 50% train data, 20% validation, and 30% test, and a minimum leaf size of 20. With this large leaf size, all errors on train, validation, and test are broadly comparable, but our approach of directly optimizing an objective and a fairness measure leads us to outperform others. Figure 1 top right shows the importance of being able to use a validation set to balance errors. Using sklearn's default parameters we overfit to adult, and as the classifier is perfect on the training set, all fairness metrics that match error rates are trivially satisfied [72, 17]. The same behavior can be observed using XGBoost on the medical dataset [73] when enforcing equal opportunity with respect to sex 6 . In general, tabular methods need not overfit, and tuning parameters carefully can allow users to get relatively good performance while maintaining error rates between training and test. Figure 3 left shows Equal Opportunity on the COMPAS dataset. To show that OxonFair can also work in low-data regimes where we have insufficient data for validation, we enforce fairness on the training set. As before, we binarize race to allow the use of AIF360. We drop race from the training data, and use inferred protected attributes to enforce fairness. Here OxonFair generates a frontier that is comparable or better than results from existing toolkits, and OxonFair+ (see Section A), further improves on these results. See Figure 3 right for a comparison with Fairlearn varying the groups. ## 5.1 Computer Vision and CelebA CelebA [79]: We use the standard aligned &amp; cropped partitions frequently used in fairness evaluation [17, 21, 28, 41]. Following Ramaswamy et al. [28], we consider the 26 gender-independent , genderdependent and inconsistently labelled attributes as the target attributes for our evaluations (see Table 12 for details). Male is treated as the protected attribute. Implementation Details We follow Wang et al.'s setup [41]. We use a Resnet-50 backbone [80] trained on ImageNet [81]. A multitask classification model is trained, replacing the final fullyconnected layer of the backbone with a separate fully-connected head that performs binary prediction for all attributes. Dropout [82] (p = 0.5) is applied. All models are trained with a batch size of 6 This dataset is carefully curated and balanced. To induce unfairness we altered the sampling and dropped half the people recorded as male and that did not have medical complications across the entire dataset. Table 2: A comparison against standard vision approaches on the more challenging CelebA attributes. OxonFair continues to work well here. All methods share a common backbone and training process. An extended version of this table that considers minimax fairness can be found in Table 14. | | ERM | Uniconf. Adv [74] | Domain Disc. [41] | Domain Ind. [41] | OxonFair DEO | |------------------------------------|------------------------------------|------------------------------------|------------------------------------|------------------------------------|------------------------------------| | Gender-Dependent Attributes | Gender-Dependent Attributes | Gender-Dependent Attributes | Gender-Dependent Attributes | Gender-Dependent Attributes | Gender-Dependent Attributes | | Acc. ( ↑ ) | 86.7 | 86.1 | 86.6 | 85.6 | 85.8 | | DEO ( ↓ ) | 26.4 | 25.0 | 21.9 | 6.50 | 3.92 | | Inconsistently Labelled Attributes | Inconsistently Labelled Attributes | Inconsistently Labelled Attributes | Inconsistently Labelled Attributes | Inconsistently Labelled Attributes | Inconsistently Labelled Attributes | | Acc. ( ↑ ) | 83.0 | 82.5 | 83.1 | 82.3 | 82.1 | | DEO ( ↓ ) | 21.9 | 29.1 | 25.3 | 17.2 | 2.36 | Figure 4: Left: The Pareto frontier of min. group recall vs. accuracy on Blond Hair demonstrates OxonFair's superior performance. Right: Comparing accuracy of fairness methods on 26 CelebA attributes while varying global decision thresholds to increase the minimum group recall level to δ . <!-- image --> | CelebA | δ = 0.50 | δ = 0.75 | δ = 0.90 | |-------------|------------|------------|------------| | Baseline | 89 | 84.5 | 77.6 | | Adversarial | 87.8 | 82.4 | 75.2 | | Domain-Dep | 82.3 | 76.8 | 68.6 | | Domain-Ind | 89.2 | 86.2 | 79.8 | | OxonFair | 89.9 | 87.3 | 81.8 | 32 is and using Adam [83] (learning rate 1e-4). We train for 20 epochs and select the model with highest validation accuracy. Images are center-cropped and resized to 224 × 224 . During training, we randomly crop and horizontally flip the images. See Appendix E. Results: Table 1 and 2 demonstrates that using OxonFair as described in Section 4.2 generates fairer and more accurate solutions on unseen test data than other fair methods. Simple approaches such as Domain Independent training were more effective than adversarial training for enforcing fairness confirming [41, 43]. Occasionally, OxonFair finds solutions on the Pareto frontier that are both fairer and more accurate than the unconstrained classifier (See Figure 5). Figure 4 shows a novel fairness evaluation motivated by medical use cases [8, 42] where practitioners might want to correctly identify at least δ % of positive cases in each group. We evaluate how accuracy changes if we guarantee that the minimum recall is above δ % for every group. For OxonFair, we call .fit(accuracy, recall.min, δ ) . For other methods, we vary a global offset to ensure that the minimum recall is at least δ . ## 5.2 NLP and Toxic Content We conducted experiments on hate speech detection and toxicity classification using two datasets: the multilingual Twitter corpus [84] and Jigsaw [85]. Experiments were performed across five languages (English, Polish, Spanish, Portuguese, and Italian) and five demographic factors (age, country, gender, race/ethnicity, and religion) were treated as the protected groups. For details, see Appendix F.1. We compare OxonFair with the following approaches. Base reports results of the standard BERT model [86]. CDA (Counterfactual Data Augmentation) [29, 87-90] rebalances a corpus by swapping bias attribute words (e.g., he/she) in a dataset based on a given dictionary. DP (Demographic Parity) uses regularization [23, 43] to enforce DP. EO (Equal Opportunity [3]) uses the regularization of [23, 43] to enforce EO. Dropout [88, 90] is used as a regularization technique [82] for bias mitigation and improving small group generalization. Rebalance [11, 91] method resamples the minor groups to the same sample size as other groups to mitigate bias. We report scores on Oxonfair optimized for Figure 5: The Pareto frontier on test data when enforcing two fairness measures (DEO and Min Group Min Label Acc; see Appendix C.1) for the Earrings attribute. Inspecting the Pareto frontier shows a wide range of solutions, including some that improve fairness while retaining similar accuracy. <!-- image --> Table 3: Multilingual Twitter dataset: Gender (2 groups: Male, Female). Table 4: Jigsaw dataset: Religion (3 groups: Christian, Muslim, Other). | | F1 ( ↑ ) | Balanced Acc. ( ↑ ) | Acc. ( ↑ ) | DEO ( ↓ ) | |------------------|------------|-----------------------|--------------|-------------| | Base | 40.8 | 63.2 | 89.8 | 21.4 | | CDA [29] | 43.2 | 64.4 | 89.8 | 16 | | DP [23] | 37.2 | 61.7 | 89.5 | 17.9 | | EO [3] | 32 | 59.6 | 89.1 | 13.2 | | Dropout [88] | 32.2 | 59.8 | 88.9 | 13.8 | | Rebalance [11] | 38.2 | 62.1 | 89.5 | 19.1 | | OxonFair Ac. | 34.1 | 60.7 | 88.5 | 8.45 | | OxonFair F1 | 44.6 | 69.1 | 84.7 | 2.1 | | OxonFair B. Ac. | 47.1 | 71.2 | 84.8 | 7.33 | | OxonFair* 7 Ac. | 40 | 63.3 | 89 | 5.99 | | OxonFair* F1 | 47.5 | 70.9 | 85.5 | 5.59 | | OxonFair* B. Ac. | 40.8 | 64.6 | 87.3 | 13 | | | F1 ( ↑ ) | Balanced Acc. ( ↑ ) | Acc. ( ↑ ) | DEO ( ↓ ) | |------------------|------------|-----------------------|--------------|-------------| | Base | 42.1 | 74.8 | 75 | 7.33 | | CDA [29] | 40.4 | 73.8 | 73 | 8.98 | | DP [23] | 44.5 | 69.2 | 85.5 | 3.68 | | EO [3] | 41.1 | 68.8 | 82.2 | 4.6 | | Dropout [88] | 42.7 | 74.1 | 77 | 7.94 | | Rebalance [11] | 39.1 | 73.7 | 70.3 | 9.67 | | OxonFair Ac. | 33.7 | 60.5 | 89.2 | 2.36 | | OxonFair F1 | 44.4 | 69.5 | 85 | 3.79 | | OxonFair B. Ac. | 42.2 | 74.2 | 76.1 | 4.78 | | OxonFair* Ac. | 26.3 | 57.6 | 88.8 | 1.84 | | OxonFair* F1 | 44.3 | 68.5 | 86.2 | 0.84 | | OxonFair* B. Ac. | 41.9 | 73.7 | 76.3 | 4.56 | different metrics: Accuracy; F1 Score; and Balanced Accuracy, and always enforce that the DEO is under 5% on the validation set. Results: Results are shown in Tables 3 and 4. Our observations indicate that: 1) all debiasing methods improve the equal opportunity score and help mitigate bias on Twitter, but not on jigsaw. 2) our toolkit consistently reduces the difference in equal opportunity more than any other approach; 3) for 4/6 experiments we actually improve the objective over the baseline while enforcing fairness, showing the value in targeting a particular objective. For additional experiments on multilingual and multi-demographic data, and the Jigsaw race data, see Appendix F.2, and Appendix F.3. While improving fairness more than existing approaches, OxonFair performs substantially worse on these NLP datasets than on CelbA. There are two challenges present in these datasets but not in CelbA: (i) it is likely much harder to infer gender or religion from short text data than it is to infer gender from a photo of a face. (ii) The limited number of positively labelled datapoints examples that makes estimating DEO, F1,and balanced accuracy unstable (see Tables 16 and 17 for details). To better understand the influence of the two factors we refit OxonFair using the true rather than inferred attributes at test time (bottom block) and see no reliable improvements, suggesting that we are most limited by data scarcity. 7 Here OxonFair indicates the use of inferred group membership, while OxonFair* uses true group membership at test time. ## 6 Conclusion The key contributions of our toolkit lie in being more expressive than other approaches, and supporting NLP and computer vision. Despite this, most of the experiments focus on the standard definitions of Demographic Parity and Equal Opportunity. This is not because we agree that they are the right measures, but because we believe that the best way to show that OxonFair works is to compete with other methods in what they do best. On low-dimensional tabular data, when optimizing accuracy and a standard fairness measure, it is largely comparable with Fairlearn, but if overfitting or non-standard performance criteria and fairness metrics are a concern, then OxonFair has obvious advantages. For NLP, and computer vision, our approach clearly improves on existing state-of-the-art. In no small part, this is due to the observation of [17], that methods for estimating or enforcing error-based fairness metrics on high-capacity models that do not use held-out validation data can not work. We hope that OxonFair will free policy-makers and domain experts to directly specify fairness measures and objectives that are a better match for the harms that they face. In particular, we want to call out the measures in Figure 8 as relevant to medical ML. The question of how much accuracy can we retain, while guaranteeing that classifier sensitivity (AKA recall) is above k% for every group, captures notions of fairness and clinical relevance in a way that standard fairness notions do not [8]. Limitations: We have chosen to optimize as broad a set of formulations as possible. As a result, for certain metrics (particularly equalized odds [3]) the solutions found are known to be suboptimal ; 8 and for others [12] the exponential search is unneeded. Techniques targeting particular formulations may be needed to address this. A key challenge for most fairness approaches is in obtaining the group labels used to measure unfairnesss, and we are no exception. In particular, the gender labels in CelebA and the race and religion labels in our NLP experiments consist of a small number of externally assigned labels that may not match how people self-identify. Improving and measuring fairness with respect to these coarse labels can miss other forms of inequality. Moreover, a major driver of unfairness is a lack of data regarding particular groups. However, this very absence of data makes it hard for any toolkit to detect or rectify unfairness. Broader Impact: OxonFair is a tool for altering the decisions made by ML systems that are frequently trained on biased data. Care must be taken that fair ML is used as a final step after correcting for bias and errors in data collation, and not as a sticking plaster to mask problems [92]. Indeed, inappropriate uses of fairness can lock in biases present in training [72]. Under the hood, OxonFair performs a form of positive discrimination, where we alter scores in response to (perceived) protected characteristics to rectify specific existing inequalities 9 . As such, there are many scenarios where its use may be inappropriate for legal or ethical reasons. ## 7 Acknowledgements This work has been supported through research funding provided by the Wellcome Trust (grant nr 223765/Z/21/Z), Sloan Foundation (grant nr G-2021-16779), Department of Health and Social Care, EPSRC (grant nr EP/Y019393/1), and Luminate Group. Their funding supports the Trustworthiness Auditing for AI project and Governance of Emerging Technologies research programme at the Oxford Internet Institute, University of Oxford. An early prototype version of the same toolkit, for tabular data, was developed while CR was working at AWS and is available online as autogluon.fair ( https://github.com/autogluon/ autogluon-fair/ ). CR is grateful to Nick Erickson and Weisu Yin for code reviews of the prototype. The authors thank Kaivalya Rawal for feedback on the manuscript and testing the codebase. ## References - [1] David Wen, Saad M Khan, Antonio Ji Xu, Hussein Ibrahim, Luke Smith, Jose Caballero, Luis Zepeda, Carlos de Blas Perez, Alastair K Denniston, Xiaoxuan Liu, et al. Characteristics of 8 See Appendix D.2 for ways to address this. 9 See [21] for how other fairness methods may also be doing this. | | 4(1):e64-e74, 2022. | |------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [2] | Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. Dissecting racial bias in an algorithm used to manage the health of populations. Science , 366(6464):447-453, 2019. | | [3] | Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. Advances in neural information processing systems , 29, 2016. | | [4] | Emmanuel Martinez and Lauren Kirchner. The secret bias hidden in mortgage-approval algorithms. The Markup , 2021. | | [5] | Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine bias. In Ethics of data and analytics , pages 254-264. Auerbach Publications, 2022. | | [6] | Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness and machine learning: Limita- tions and opportunities . MIT Press, 2023. | | [7] | Richard Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, and Aaron Roth. Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research , 50(1):3-44, 2021. | | [8] | Brent Mittelstadt, Sandra Wachter, and Chris Russell. The unfairness of fair machine learning: Levelling down and strict egalitarianism by default. arXiv preprint arXiv:2302.02404 , 2023. | | [9] | Rachel KE Bellamy, Kuntal Dey, Michael Hind, Samuel C Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, et al. Ai fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv preprint arXiv:1810.01943 , 2018. | | [10] | Faisal Kamiran, Indr˙ e Žliobait˙, e and Toon Calders. Quantifying explainable discrimination and removing illegal discrimination in automated decision making. Knowledge and information systems , 35:613-644, 2013. | | [11] | Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkata- subramanian. Certifying and removing disparate impact. In proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining , pages 259-268, 2015. | | [12] | Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd acm sigkdd international conference on knowledge discovery and data mining , pages 797-806, 2017. | | [13] | Zachary Lipton, Julian McAuley, and Alexandra Chouldechova. Does mitigating ml's impact disparity require treatment disparity? Advances in neural information processing systems , 31, 2018. | | [14] | Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data , 5(2):153-163, 2017. | | [15] | Aditya Krishna Menon and Robert C Williamson. The cost of fairness in binary classification. In Conference on Fairness, accountability and transparency , pages 107-118. PMLR, 2018. | | [16] | Luca Oneto, Michele Doninini, Amon Elders, and Massimiliano Pontil. Taking advantage of multitask learning for fair classification. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society , pages 227-237, 2019. | | [17] | Dominik Zietlow, Michael Lohaus, Guha Balakrishnan, Matthäus Kleindessner, Francesco Locatello, Bernhard Schölkopf, and Chris Russell. Leveling down in computer vision: Pareto inefficiencies in fair deep classifiers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 10410-10421, 2022. | | [18] | F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research , 12:2825-2830, 2011. | |--------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [19] | Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining , pages 785-794, 2016. | | [20] | Hilde Weerts, Miroslav Dudak, Richard Edgar, Adrin Jalali, Roman Lutz, and Michael Madaio. Fairlearn: Assessing and improving fairness of ai systems. Journal of Machine Learning Research , 24(257):1-8, 2023. | | [21] | Michael Lohaus, Matthäus Kleindessner, Krishnaram Kenthapadi, Francesco Locatello, and Chris Russell. Are two heads the same as one? identifying disparate treatment in fair neural networks. Advances in Neural Information Processing Systems , 35:16548-16562, 2022. | | [22] | Sorelle A Friedler, Carlos Scheidegger, Suresh Venkatasubramanian, Sonam Choudhary, Evan P Hamilton, and Derek Roth. A comparative study of fairness-enhancing interventions in machine learning. In Proceedings of the conference on fairness, accountability, and transparency , pages 329-338, 2019. | | [23] | Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rogriguez, and Krishna P Gummadi. Fairness constraints: Mechanisms for fair classification. In Artificial intelligence and statistics , pages 962-970. PMLR, 2017. | | [24] | Agathe Balayn, Christoph Lofi, and Geert-Jan Houben. Managing bias and unfairness in data for decision support: a survey of machine learning and data engineering approaches to identify and mitigate bias and unfairness within data management and analytics systems. The VLDB Journal , 30(5):739-768, 2021. | | [25] | Faisal Kamiran and Toon Calders. Data preprocessing techniques for classification without discrimination. Knowledge and information systems , 33(1):1-33, 2012. | | [26] | Flavio Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. Optimized pre-processing for discrimination prevention. Advances in neural information processing systems , 30, 2017. | | [27] | Joymallya Chakraborty, Suvodeep Majumder, and Tim Menzies. Bias in machine learning software: Why? how? what to do? In Proceedings of the 29th ACM joint meeting on European software engineering conference and symposium on the foundations of software engineering , pages 429-440, 2021. | | [28] | Vikram V Ramaswamy, Sunnie SY Kim, and Olga Russakovsky. Fair attribute classification through latent space de-biasing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 9301-9310, 2021. | | [29] | Ran Zmigrod, Sabrina J Mielke, Hanna Wallach, and Ryan Cotterell. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. arXiv preprint arXiv:1906.04571 , 2019. | | [30] | Michael Lohaus, Michael Perrot, and Ulrike Von Luxburg. Too relaxed to be fair. In Interna- tional Conference on Machine Learning , pages 6360-6369. PMLR, 2020. | | [31] | Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. A reductions approach to fair classification. In International conference on machine learning , pages 60-69. PMLR, 2018. | | [32] | Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society , pages 335-340, 2018. | | [33] | David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In International Conference on Machine Learning , pages 3384-3393. PMLR, 2018. | |--------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [34] | Han Zhao, Amanda Coston, Tameem Adel, and Geoffrey J Gordon. Conditional learning of fair representations. arXiv preprint arXiv:1910.07162 , 2019. | | [35] | Byungju Kim, Hyunwoo Kim, Kyungsu Kim, Sungjin Kim, and Junmo Kim. Learning not to learn: Training deep neural networks with biased data. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 9012-9020, 2019. | | [36] | Enzo Tartaglione, Carlo Alberto Barbano, and Marco Grangetto. End: Entangling and disentangling deep representations for bias correction. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 13508-13517, 2021. | | [37] | Mhd Hasan Sarhan, Nassir Navab, Abouzar Eslami, and Shadi Albarqouni. Fairness by learning orthogonal disentangled representations. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXIX 16 , pages 746-761. Springer, 2020. | | [38] | Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731 , 2019. | | [39] | Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, and Sungrae Park. Swad: Domain generalization by seeking flat minima. Advances in Neural Information Processing Systems , 34:22405-22418, 2021. | | [40] | Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. arXiv preprint arXiv:2010.01412 , 2020. | | [41] | Zeyu Wang, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem Nair, Kenji Hata, and Olga Russakovsky. Towards fairness in visual recognition: Effective strategies for bias mitigation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 8919-8928, 2020. | | [42] | Yongshuo Zong, Yongxin Yang, and Timothy Hospedales. Medfair: Benchmarking fairness for medical imaging. In International Conference on Learning Representations (ICLR) , 2023. | | [43] | Xiaotian Han, Jianfeng Chi, Yu Chen, Qifan Wang, Han Zhao, Na Zou, and Xia Hu. FFB: A fair fairness benchmark for in-processing group fairness methods. In The Twelfth International Conference on Learning Representations , 2024. | | [44] | Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. On fairness and calibration. Advances in neural information processing systems , 30, 2017. | | [45] | Raman Dutt, Ondrej Bohdal, Sotirios Tsaftaris, and Timothy Hospedales. Fairtune: Opti- mizing parameter efficient fine tuning for fairness in medical image analysis. In The Twelfth International Conference on Learning Representations , 2024. | | [46] | Haonan Wang, Ziwei Wu, and Jingrui He. Fairif: Boosting fairness in deep learning via influence functions with validation set sensitive attributes. arXiv preprint arXiv:2201.05759v2 , 2024. | | [47] | MZehlike, C Castillo, F Bonchi, S Hajian, and MMegahed. Fairness measures: Datasets and software for detecting algorithmic discrimination. URL http://fairness-measures. org , 2017. | | [48] | Catherina Xu, Christina Greer, Manasi N Joshi, and Tulsee Doshi. Fairness indicators demo: Scalable infrastructure for fair ml systems, 2020. | | [49] | Kacper Sokol, Raul Santos-Rodriguez, and Peter Flach. Fat forensics: A python toolbox for algorithmic fairness, accountability and transparency. Software Impacts , 14:100406, 2022. | [50] Julius A Adebayo et al. Fairml: Toolbox for diagnosing bias in predictive modeling, 2016. | [51] | Florian Tramer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, Jean-Pierre Hubaux, Mathias Humbert, Ari Juels, and Huang Lin. Fairtest: Discovering unwarranted associations in data-driven applications. In 2017 IEEE European Symposium on Security and Privacy (EuroS&P) , pages 401-416. IEEE, 2017. | |--------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [52] | Pedro Saleiro, Benedict Kuester, Loren Hinkson, Jesse London, Abby Stevens, Ari Anisfeld, Kit T Rodolfa, and Rayid Ghani. Aequitas: A bias and fairness audit toolkit. arXiv preprint arXiv:1811.05577 , 2018. | | [53] | Niels Bantilan. Themis-ml: A fairness-aware machine learning interface for end-to-end discrimination discovery and mitigation. Journal of Technology in Human Services , 36(1):15- 30, 2018. | | [54] | Kate Crawford. The trouble with bias - nips 2017 keynote. https://www.youtube.com/ watch?v=fMym_BKWQzk , 2017. | | [55] | Michelle Seng Ah Lee and Jat Singh. The landscape and gaps in open source fairness toolkits. In Proceedings of the 2021 CHI conference on human factors in computing systems , pages 1-13, 2021. | | [56] | Maarten Buyl, MaryBeth Defrance, and Tijl De Bie. fairret: a framework for differentiable fairness regularization terms. In The Twelfth International Conference on Learning Represen- tations , 2024. | | [57] | André Cruz and Moritz Hardt. Unprocessing seven years of algorithmic fairness. In The Twelfth International Conference on Learning Representations , 2024. | | [58] | The pandas development team. pandas-dev/pandas: Pandas, February 2020. | | [59] | Natalia Martinez, Martin Bertran, and Guillermo Sapiro. Minimax pareto fairness: A multi objective perspective. In International Conference on Machine Learning , pages 6755-6764. PMLR, 2020. | | [60] | Chloé Bakalar, Renata Barreto, Stevie Bergman, Miranda Bogen, Bobbie Chern, Sam Corbett- Davies, Melissa Hall, Isabel Kloumann, Michelle Lam, Joaquin Quiñonero Candela, et al. Fairness on the ground: Applying algorithmic fairness approaches to production systems. arXiv preprint arXiv:2103.06172 , 2021. | | [61] | Sahil Verma and Julia Rubin. Fairness definitions explained. In Proceedings of the international workshop on software fairness , pages 1-7, 2018. | | [62] | Sanjiv Das, Michele Donini, Jason Gelman, Kevin Haas, Mila Hardt, Jared Katzman, Kr- ishnaram Kenthapadi, Pedro Larroy, Pinar Yilmaz, and Bilal Zafar. Fairness measures for machine learning in finance. The Journal of Financial Data Science , 2021. | | [63] | Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference , pages 214-226, 2012. | | [64] | Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. Advances in neural information processing systems , 30, 2017. | | [65] | Silvia Chiappa. Path-specific counterfactual fairness. In Proceedings of the AAAI conference on artificial intelligence , volume 33, pages 7801-7808, 2019. | | [66] | Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, and Aaron Roth. Min- imax group fairness: Algorithms and experiments. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society , pages 66-76, 2021. | | [67] | Jacob DAbernethy, Pranjal Awasthi, Matthäus Kleindessner, Jamie Morgenstern, Chris Russell, and Jie Zhang. Active sampling for min-max fairness. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning , volume 162 of Proceedings of Machine Learning Research | | [68] | Sandra Wachter, Brent Mittelstadt, and Chris Russell. Why fairness cannot be automated: Bridging the gap between eu non-discrimination law and ai. Computer Law &Security Review , 41:105567, 2021. | |--------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [69] | Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In EMNLP , 2017. | | [70] | Angelina Wang and Olga Russakovsky. Directional bias amplification. In International Conference on Machine Learning , pages 10882-10893. PMLR, 2021. | | [71] | Nick Erickson, Jonas Mueller, Alexander Shirkov, Hang Zhang, Pedro Larroy, Mu Li, and Alexander Smola. Autogluon-tabular: Robust and accurate automl for structured data. arXiv preprint arXiv:2003.06505 , 2020. | | [72] | Sandra Wachter, Brent Mittelstadt, and Chris Russell. Bias preservation in machine learning: the legality of fairness metrics under eu non-discrimination law. W. Va. L. Rev. , 123:735, 2020. | | [73] | Sergey E Golovenkin, Jonathan Bac, Alexander Chervov, Evgeny MMirkes, Yuliya V Orlova, Emmanuel Barillot, Alexander N Gorban, and Andrei Zinovyev. Trajectories, bifurcations, and pseudo-time in large clinical datasets: applications to myocardial infarction and diabetes data. GigaScience , 9(11):giaa128, 2020. | | [74] | Mohsan Alvi, Andrew Zisserman, and Christoffer Nellåker. Turning a blind eye: Explicit removal of biases and variation from deep neural network embeddings. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops , pages 0-0, 2018. | | [75] | Amelie Royer and Christoph H Lampert. Classifier adaptation at prediction time. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition , pages 1401-1409, 2015. | | [76] | Manisha Padala and Sujit Gujar. Fnnc: Achieving fairness through neural networks. In Proceed- ings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, { IJCAI-20 } , International Joint Conferences on Artificial Intelligence Organization , 2020. | | [77] | Michael Wick, Jean-Baptiste Tristan, et al. Unlocking fairness: a trade-off revisited. Advances in neural information processing systems , 32, 2019. | | [78] | Ching-Yao Chuang and Youssef Mroueh. Fair mixup: Fairness via interpolation. In Interna- tional Conference on Learning Representations , 2021. | | [79] | Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision , pages 3730-3738, 2015. | | [80] | Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 770-778, 2016. | | [81] | Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large- scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition , pages 248-255. Ieee, 2009. | | [82] | Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research , 15(1):1929-1958, 2014. | | [83] | Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 , 2014. | | [84] | Xiaolei Huang, Linzi Xing, Franck Dernoncourt, and Michael Paul. Multilingual twitter corpus and baselines for evaluating demographic bias in hate speech recognition. In Proceedings of the Twelfth Language Resources and Evaluation Conference , pages 1440-1448, 2020. | | [85] | Jigsaw unintended bias in toxicity classification, 2018. | |--------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [86] | Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 , 2018. | | [87] | Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, and Jason Weston. Queens are powerful too: Mitigating gender bias in dialogue generation. arXiv preprint arXiv:1911.03842 , 2019. | | [88] | Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, and Slav Petrov. Measuring and reducing gendered correlations in pre-trained models. arXiv preprint arXiv:2010.06032 , 2020. | | [89] | Soumya Barikeri, Anne Lauscher, Ivan Vuli´, c and Goran Glavaš. Redditbias: A real-world resource for bias evaluation and debiasing of conversational language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) , pages 1941-1955, 2021. | | [90] | Nicholas Meade, Elinor Poole-Dayan, and Siva Reddy. An empirical survey of the effectiveness of debiasing techniques for pre-trained language models. arXiv preprint arXiv:2110.08527 , 2021. | | [91] | Yi Li and Nuno Vasconcelos. Repair: Removing representation bias by dataset resampling. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 9572-9581, 2019. | | [92] | Agathe Balayn, Mireia Yurrita, Jie Yang, and Ujwal Gadiraju. ' ✓ □ fairness toolkits, a checkbox culture?' on the factors that fragment developer practices in handling algorithmic harms. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society , pages 482-495, 2023. | | [93] | Michaela Hardt, Xiaoguang Chen, Xiaoyi Cheng, Michele Donini, Jason Gelman, Satish Gollaprolu, John He, Pedro Larroy, Xinyu Liu, Nick McCarthy, et al. Amazon sagemaker clarify: Machine learning bias detection and explainability in the cloud. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery &Data Mining , pages 2974-2983, 2021. | | [94] | Harvineet Singh, Matthäus Kleindessner, Volkan Cevher, Rumi Chunara, and Chris Russell. When do minimax-fair learning and empirical risk minimization coincide? In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, Proceedings of the 40th International Conference on Machine Learning , volume 202 of Proceedings of Machine Learning Research , pages 31969-31989. PMLR, 23-29 Jul 2023. | | [95] | Sofie Goethals, Eoin Delaney, Brent Mittelstadt, and Chris Russell. Resource-constrained fairness. arXiv preprint arXiv:2406.01290 , 2024. | | [96] | Matthew Groh, Caleb Harris, Luis Soenksen, Felix Lau, Rachel Han, Aerin Kim, Arash Koochek, and Omar Badri. Evaluating deep neural networks trained on clinical images in dermatology with the fitzpatrick 17k dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 1820-1828, 2021. | | [97] | Kweku Kwegyir-Aggrey, Jessica Dai, A Feder Cooper, John Dickerson, Keegan Hines, and Suresh Venkatasubramanian. Repairing regressors for fair binary classification at any decision threshold. In NeurIPS 2023 Workshop Optimal Transport and Machine Learning , 2023. | | [98] | Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerryman- dering: Auditing and learning for subgroup fairness. In International conference on machine learning , pages 2564-2572. PMLR, 2018. | | [99] | David Freedman, Robert Pisani, and Roger Purves. Statistics . W. W. Norton &Company, New York, fourth edition, 2007. Hardcover, 700 pages. | | [100] | Peter J Bickel, Eugene A Hammel, and J William O'Connell. Sex bias in graduate admissions: Data from berkeley: Measuring bias is harder than is usually assumed, and the evidence is sometimes contrary to expectation. Science , 187(4175):398-404, 1975. | |---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [101] | Frances Ding, Moritz Hardt, John Miller, and Ludwig Schmidt. Retiring adult: New datasets for fair machine learning. Advances in Neural Information Processing Systems , 34, 2021. | | [102] | Pranjal Awasthi, Matthäus Kleindessner, and Jamie Morgenstern. Equalized odds postpro- cessing under imperfect group information. In Silvia Chiappa and Roberto Calandra, editors, Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statis- tics , volume 108 of Proceedings of Machine Learning Research , pages 1770-1780. PMLR, 26-28 Aug 2020. | | [103] | Vladimir N Vapnik. An overview of statistical learning theory. IEEE transactions on neural networks , 10(5):988-999, 1999. | | [104] | Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair represen- tations. In International conference on machine learning , pages 325-333. PMLR, 2013. | | [105] | Drago Pleˇko c and Nicolai Meinshausen. Fair data adaptation with quantile preservation. Journal of Machine Learning Research , 21(242):1-44, 2020. | | [106] | Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. Fairness-aware classifier with prejudice remover regularizer. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2012, Bristol, UK, September 24-28, 2012. Proceedings, Part II 23 , pages 35-50. Springer, 2012. | | [107] | Elizabeth Anne Watkins, Michael McKenna, and Jiahao Chen. The four-fifths rule is not disparate impact: a woeful tale of epistemic trespassing in algorithmic fairness. arXiv preprint arXiv:2202.09519 , 2022. | <!-- image --> <!-- image --> Frontier Found <!-- image --> <!-- image --> Frontier Found Figure 6: We show a full comparison of the methods provided by AIF360 and Fairlearn on the adult dataset with 4 different choices of metric (accuracy, balanced accuracy, F1, and Matthews correlation coefficient (MCC)), while enforcing demographic parity. We follow the design decisions of [9] and use a random forest with 100 trees and a minimum leaf size of 20. Only OxonFair allows the specification of an objective, and for all other methods we try to alter the decision threshold to better optimize the objective. However, as we improve the objective, we see fairness deteriorates. To avoid this, OxonFair jointly optimize both a fairness measure and an objective. ## A Inferred characteristics In many situations, protected attributes are not available at test time. In this case, we simply use inferred characteristics to assign per-group thresholds and adjust these thresholds to guarantee fairness with respect to the true (i.e. uninferred) groups. When using inferred characteristics, we offer two pathways for handling estimated group membership. The first pathway we consider makes a hard assignment of individuals to groups, based on a classifier response. The second pathway explicitly uses the classifier confidence as part of a per-datapoint threshold. In practice, we find little difference between the two approaches, but the hard assignment to groups is substantially more efficient and therefore allows for a finer grid search and generally better performance. However, the soft assignment remains useful for the integration of our method with neural networks, where we explicitly merge two heads of a neural network to arrive at a single fair model. ## A.1 Fast pathway The fast pathway closely follows the efficient grid search for known characteristics. We partition the dataset by inferred characteristics, and then repeat the trick. However, as the inferred characteristics do not need to perfectly align with the true characteristics, we also keep track of the true group datapoints belongs to, i.e., for all datapoints assigned to a particular inferred group, we compute the cumulative sum of positives and negatives that truly belong to each group. This allows us to vary the thresholds with respect to inferred groups while computing group measures with respect to the true groups. This can be understood as replacing the decision function (1) with f ( x ) -t · G x ′ ( ) ≥ 0 where G ′ is a binary vector valued function that sums to 1, but need not correspond to G exactly. This explicit decoupling of inferred groups from the true group membership allows us to consider partitionings of the data that do not align with group membership. We found it particularly helpful to include an additional 'don't know' group. By default, any datapoint assigned a score 10 from the classifier below 2 / 3 is assigned to this group, and receives a different threshold to those datapoints that the classifier is confident about. The improved frontiers are shown in the tabular experimental section as OxonFair+, where they offer a clear advantage over our baseline OxonFair. ## A.2 Slow pathway The slow pathway tunes t to optimize the decision process f ( x ) -t · g x ( ) ≥ 0 , where g is a real vector valued function. Given the lack of assumptions, no obvious speed-up was possible and we perform a two stage naïve grid-search, first coarsely to extract an approximate Pareto frontier, and then a finer search over the range of thresholds found in the first stage. This is then followed by a final interpolation that checks for candidates around pairs of adjacent candidates currently in the frontier. In situations where g x ( ) is the output of a classifier and G x ′ ( ) its binarization, it is reasonable to suspect that loss of information from binarization might lead to a drop in performance when we compare the slow pathway with the fast. In practice, we never found a significant change, and in a like-with-like comparison over a similar number of thresholds the fast pathway was as likely to be fractionally better as it was to be worse. Moreover, for more than 3 groups the slow pathway becomes punitively slow, and to keep the runtime acceptable requires decreasing the grid size in a way that harms performance. Despite this, we kept the slow pathway as it is directly applicable to deep networks as we describe in the next section. In practice, when working with deep networks we make use of a hybrid approach, and perform the fast and slow grid searches before fusing them into a single frontier and then performing interpolation. This allows us to benefit from the better solutions found by a fine grid search when the output of the second head is near binary (see Figure 2), and robustly carry over to the slower pathway where its binarization is a bad approximation of the network output. ## B Implementation of Performance and Fairness Measures To make OxonFair readily extensible, we create a custom class to implement all performance and fairness measures. As such, should OxonFair not support a particular measure, both the objectives and constraints can be readily extended by the end user. Measures used by OxonFair are defined as instances of a python class GroupMetrics . Each group measure is specified by a function that takes the number of True Positives, False Positives, False Negatives, and True Negatives and returns a score; A string specifying the name of the measure; and optionally a bool indicating if greater values are better than smaller ones. For example, accuracy is defined as: accuracy = gm.GroupMetric(lambda TP, FP, FN, TN: (TP + TN) / (TP + FP + FN + TN), 'Accuracy') For efficiency, our approach relies on broadcast semantics and all operations in the function must be applicable to numpy arrays. Having defined a GroupMetric it can be called in two ways. Either: accuracy(target\_labels, predictions, groups) Here target\_labels and predictions are binary vectors corresponding to either the target groundtruth values, or the predictions made by a classifier, with 1 representing the positive label and 0 otherwise. groups is simply a vector of values where each unique value is assumed to correspond to a distinct group. The other way it can be called is by passing it a single 3D array of dimension 4 by number of groups by k, where k is the number of candidate classifiers that the measure should be computed over. 10 User controllable threshold. Table 5: The fairness measures in the review of [61]. All 9 group metrics that concern the decisions made by a classifier are supported by OxonFair. | Vema and Rubin [61] Metrics | OxonFair name | Fairlearn | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------| | Group fairness or statistical parity Conditional statistical parity Predictive parity False positive error rate balance False negative error rate balance Equalized odds Conditional use accuracy equality Overall accuracy equality Treatment equality Test-fairness or calibration Well calibration Balance for positive class Balance for negative class Causal discrimination Fairness through unawareness Fairness through awareness No unresolved discrimination No proxy discrimination Fair inference | demographic_parity conditional_group_metrics. pos_pred_rate.diff predictive_parity false_pos_rate.diff false_neg_rate.diff equalized_odds cond_use_accuracy accuracy.diff treatment.diff Not decision based Not decision based Not decision based Not decision based Individual fairness Individual fairness Individual fairness Individual fairness Individual fairness Individual fairness | Yes No No Yes Yes Yes No No No | As a convenience, GroupMetrics automatically implements a range of functionality as sub-objects. Having defined a metric as above, we have a range of different objects: - · metric.diff reports the average absolute difference of the method between groups. - · metric.average reports the average of the method taken over all groups. - · metric.max\_diff reports the maximum difference of the method between any pair of groups. - · metric.max reports the maximum value for any group. - · metric.min reports the minimum value for any group. - · metric.overall reports the overall value for all groups combined, and is the same as calling metric directly - · metric.ratio reports the average over distinct pairs of groups of the smallest value divided by the largest - · metric.per\_group reports the value for every group. All of these can be passed directly to fit, or to the evaluation functions we provide. The vast majority of fairness metrics are implemented as a .diff of a standard performance measure, and by placing a .min after any measure such as recall or precision it is possible to add constraints that enforce that the precision or recall is above a particular value for every group. Total Metrics Computing certain metrics, particular Conditional Demographic Disparity [68], and Bias Amplification [70], requires knowledge of the total number of TP etc. alongside the number of TP in each group. To implement these measures, we support lambda functions that take per-group values, followed by the global values, giving 8 arguments in total. These lambda functions should be passed to GroupMetric in the same way along ith the optional argument total\_metric=True . Table 6: The post-training fairness measures in the review of [93]. All measures are supported by OxonFair. | Post-training Metrics [93] | OxonFair name | Fairlearn | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------| | Diff. in pos. proportions in predicted labels Disparate Impact Difference in Conditional Acceptance Difference in Conditional Rejection Accuracy Difference Recall Difference Difference In Acceptance Rates Difference in Rejection Rates Treatment Equality Conditional Demographic Disparity | demographic_parity disparate_impact cond_accept.diff cond_reject.diff accuracy.diff recall.diff acceptance_rate.diff rejection_rate.diff treatment_equality conditional_group_metrics. pos_pred_rate.diff | Yes No No No No Yes No No No No | Table 7: Enforcing fairness for all definitions in [93] on COMPAS with inferred attributes. We enforce the fairness definitions with respect to three racial groups, African American, Caucasian, and Other - consisting of all other labelled ethnicities. There are a total 350 individuals labelled 'Other' in the test set, making most metrics of fairness unstable and difficult to enforce. Nonetheless, we improve on all metrics. For all metrics except disparate impact, we enforce that the score on train is below 2.5% and for disparate impact we enforce that the score on train is above 97.5%. XGBoost is used as the base classifier, and the dataset is split into 70% train and 30% test. | | Measure (original) | Measure (updated) | Accuracy (original) | Accuracy (updated) | |-------------------------------------------|----------------------|---------------------|-----------------------|----------------------| | Demographic Parity | 0.148706 | 0.097142 | 0.661345 | 0.620588 | | Disparate Impact | 0.668305 | 0.74094 | 0.661345 | 0.605042 | | Difference in Conditional Acceptance Rate | 0.231862 | 0.151159 | 0.661345 | 0.642857 | | Difference in Conditional Rejectance Rate | 0.048625 | 0.025138 | 0.661345 | 0.655882 | | Difference in Accuracy | 0.013172 | 0.006351 | 0.661345 | 0.665546 | | Difference in Recall | 0.15121 | 0.105154 | 0.661345 | 0.612185 | | Difference in Acceptance Rate | 0.070072 | 0.066591 | 0.661345 | 0.662605 | | Difference in Specificity | 0.09749 | 0.064139 | 0.661345 | 0.660504 | | Difference in Rejection Rate | 0.050085 | 0.050215 | 0.661345 | 0.661345 | | Treatment Equality | 0.201717 | 0.105115 | 0.661345 | 0.660924 | | Conditional Demographic Parity | 0.150927 | 0.073203 | 0.661345 | 0.626471 | ## C Additional Metrics To demonstrate OxonFair's versatility, Tables 5 and 6 show the metrics of two review papers and how many can are implemented out of the box by our approach. An example showing how all clarify metrics can be enforced using inferred groups, and three group labels on compas can be seen in Table 7. ## C.1 Minimax Fairness Minimax fairness [59, 66, 67] refers to the family of methods which minimize the loss of the group where the algorithm performs worst, i.e., they minimize the maximal loss. [94] observed that sufficiently expressive classifiers, such as those considered by this paper, including boosting, random forests, or deep networks on image and NLP tended to be per group optimal, when the groups do not correspond to the predicted label. As such they are already minimax optimal and the solutions found by minimax fairness methods are indistinguishable from those found by empiric risk minimization. This still leaves the case where groups include the label (for example, the groups may correspond to the product of gender and the variable we are trying to predict, such as sick or not sick). In this case, as convincingly shown by [59], the solutions found do not correspond to ERM. Here, we compare OxonFair against minimax fairness. To do this, we define a new performance measure corresponding to the lowest accuracy over the positive or negative labelled datapoints. <!-- formula-not-decoded --> Martinez et al [59] argued that we should seek a Pareto optimal solution that has the highest possible overall accuracy, subject to the requirement it maximizes the lowest per group accuracy. We can do this in OxonFair by calling fpredictor.fit(gm.min\_accuracy.min, gm.accuracy, 0) Here min\_accuracy.min corresponds to the lowest min accuracy of any group. We use accuracy &gt; 0 as the constraint, as we do not want an active constraint from preventing us from finding the element of the Pareto frontier (see [59] for frontier details) with the highest minimum accuracy. Note that the groups used by OxonFair with this loss correspond to the true groups, such as ethnicity or gender, while the groups used by minimax fairness are the product of these groups with the target labels. Existing methods for minimax fairness optimize the same loss and have indistinguishable accuracy, only differing in the speed of convergence[67] As such, in Table 8 we only report results for a variant of [67]. Similarly, in computer vision [17], optimized the same objective by iteratively generating synthetic data for the worst performing group, where groups were defined as the product of ground-truth labels, and sex. We compare against them in Table 15. Table 8: Results for XGBoost: Adult (sex) | XGBoost: Adult (sex) | Min Accuracy | Overall Accuracy | |--------------------------|----------------|--------------------| | ERM Training | 70.3% | 90.9% | | Minimax Training [67]. | 85.2% | 88.9% | | ERM Validation | 58.8% | 86.9% | | Minimax Validation [67]. | 76.2% | 83.9% | | OxonFair Validation | 79.1% | 84.4% | | ERM Test | 59.6% | 86.6% | | Minimax Test [67]. | 77.9% | 84.1% | | OxonFair Test | 80.5% | 84.6% | ## C.2 Utility Optimization OxonFair supports the utility-based approach of Bakalar et al. [60], whereby different thresholds can be selected per group to optimize a utility-based objective. Utility functions can be defined in one line. In the following example, we consider a scenario where an ML system identifies issues that may require interventions. In this example, every intervention has a cost of 1, regardless of if it was needed, but a missed intervention that was needed has a cost of 5. Finally, not making an intervention when one was not needed has a cost of 0. Figure 7 shows code where fpredictor minimizes the utility subject to the requirement that the overall recall cannot drop below 0.5. Figure 7: Defining and optimizing a custom utility function with OxonFair [60]. ## C.3 Levelling up One criticism of many methods of algorithmic fairness is that enforcing equality of recall rates (as in equal opportunity) or selection rates (as in demographic parity) will decrease the recall/selection rate for some groups while increasing it for others. This behavior is an artifact of trying to maximize accuracy [8] and occurs despite fairness methods altering the overall selection rate [95]. As an alternative, OxonFair supports levelling up where harms are reduced to, at most, a given level per group [8]. For example, if we believe that black patients are being disproportionately harmed by a high number of false negatives in cancer detection (i.e., low recall), instead of enforcing that these properties be equalized across groups, we can instead require that every group of patients has, at least, a minimum recall score. Depending on the use case, similar constraints can be imposed in with respect to per-group minimal selection rates, or minimal precision. These constraints can be enforced by a single call, for example, enforcing that the precision is above 70% while otherwise maximizing accuracy can be enforced by calling: .fit(gm.accuracy, gm.precision.min, 0.7) . See also Figure 4. <!-- image --> Figure 8: Levelling-up with OxonFair by imposing a minimum group recall of 0.7 on the Fitzpatrick17k [96] validation set fpredictor.fit(gm.accuracy, gm.recall.min, 0.7) . To enforce levelling up as a hard additional constraint, fit takes an optional argument, force\_levelling\_up . Setting this equal to '+' forces the selection rate to increase in the search procedure, while setting it equal to '-' means it can only decrease. As most performance metrics underlying fairness constraints (e.g. recall, selection rate, precision) are monotonic with respect to the selection rate [95], this can prevent levelling down. The levelling-up constraint can be combined with standard forms of fairness e.g. by calling .fit(gm.accuracy, gm.demographic\_parity, 0.01, force\_levelling\_up='+') but they are most useful when using levelling up constraints in conjuncture with an inadequately optimized classifier. In some circumstances calling .fit(gm.accuracy, gm.recall.min, 0.6) can result in a decrease in recall rate for some groups providing this increases accuracy and does not drop below the recall below the specified minimal rate. ## C.4 Fairness under constrained capacity When deploying fairness in practice, we may be capacity limited. For example, as in Figure 8 we may use the output of a classifier for detecting cancer to schedule follow-up appointments. In such a case, you might wish that the recall is high for each demographic group, but be constrained by the number of available appointments. Calling .fit(gm.recall.min, gm.pos\_pred\_rate, 0.4, greater\_is\_better\_const=False) will maximize the recall on the worst-off group subject to a requirement that no more than 40% of cases are scheduled follow-up appointments. In general, maximizing the group minimum of any measure that is monotone with respect to the selection rate, while enforcing a hard limit on the selection rate will enforce equality with respect to that measure (e.g. optimizing gm.recall.min will result in equal recall a.k.a. equal opportunity, while maximizing gm.pos\_pred\_rate.min will result in demographic parity), while also enforcing the selection rate constraints. See [95] for proof and a discussion of the issues arising, and [97] for an alternate approach. As such, calling .fit(gm.recall.min, gm.pos\_pred\_rate, k, greater\_is\_better\_const = False) will enforce equal opportunity at k % selection rate, and .fit(gm.pos\_pred\_rate.min, gm.pos\_pred\_rate, 0.4, greater\_is\_better\_const = False) will enforce demographic parity at k % selection rate. ## C.5 Conditional Metrics A key challenge of using fairness in practice is that often some sources of bias are known, and the practitioner is expected to determine if additional biases exist and to correct for them. For example, someone's salary affects which loans they are eligible for, but salary has a distinctly Figure 9: Solutions found when enforcing demographic parity with varying rate constraints. See Appendix C.4. Left: the change in precision as we enforce demographic parity. Note that we report precision as it is more informative than accuracy for low selection rates. Right: The ratio between selection rates (i.e. disparate impact) for different groups. We report the ratio rather than the difference, as the difference tends to zero as the selection rate also tends to zero. However, as the right figure shows, this ratio becomes unstable as the rate tends to zero. <!-- image --> <!-- image --> different distribution for different ethnicities and genders. [65]. Identifying and correcting fairness here rapidly becomes challenging, when considering the intersection of attributes, many small groups arise and purely by chance some form of unfairness may be observed [98, 68]suggested the use of a technique from descriptive statistics that [99] had previously applied to the problem of schools admissions at Berkley [100]. In this famous example, every school in Berkley showed little gender bias, but due to different genders applying at different rates to different schools, and the schools themselves having substantially different acceptance rate, a strong overall gender bias was apparent. [99] observed that you could correct for this bias by computing the per school selection-rate, and then taking a weighted average, where the weights are given by the total number of people applying to the school. The resulting selection rates are equivalent to a weighted selection-rate over the whole population, where the weight w i for an individual i in a particular group and applying to a particular school is w i = # individuals in school # individuals in group and school . To enforce this form of conditional demographic parity in OxonFair, we simply replace the sum of true positives etc. in Section 3, with the weighted sum. We support a range of related fairness metrics, including conditional difference in accuracy; and conditional equal opportunity (note that for equal opportunity we replace the numbers used to compute w i with the same counts but only taking into account those that have positive ground-truth labels). As such metrics can level down (Appendix C.3), we also support conditional minimum selection rates, and conditional minimum recall. In addition to this, we support the conditional fairness metrics based on EU/UK law of [68]. These measure the (weighted) proportion of people belonging to a particular group in the set of advantaged or disadvantaged people. These metrics can be computed for both the input target variables a classifier is trained to predict and for the classifier outputs. ## C.6 Bias Amplification Metrics We also support variants of Bias Amplification, as defined by Zhao et al. and Wang et al. [69, 70]. As with the other metrics, we focus on scenarios where ground-truth group assignments exist even if they are unavailable at test time, and as such we focus on the Attribute → Task Bias (see [70]). Bias Amplification is implemented as a per-group metric, gm.bias\_amplification . As this measure is signed (a negative measure indicates the classifier reverses the bias present in the dataset), directly optimising it results in classifiers strongly biased in a new direction. Instead, we minimize the per group absolute bias amplification. The derivation is given below. Following the notation of Wang et al. [70], let A be the set of protected demographic groups: for example, A = { male, female } . A a for a ∈ A is the binary random variable corresponding to the presence of the group a ; thus P A ( woman = 1) can be empirically estimated as the fraction of images in the dataset containing women. Let T t with t ∈ T similarly correspond to binary target tasks. Let ˆ A a and ˆ T t denote model predictions for the protected group a and the target task t , respectively. <!-- formula-not-decoded --> Of which, the Attribute → Task Bias is relevant here. Each component can be written as a function of the global True Positives, False Positives etc., and the per group True Positives, and as such it can be optimized by our framework, albeit, not by using a standard group metrics. However, this metric is gamable, and consistently underestimating labels in groups where they are overrepresented and vice versa would be optimal, but undesirable behavior that leads to a negative score. Instead, we consider the absolute BiasAmp: <!-- formula-not-decoded --> We can decompose | ∆ at | into the appropriate form for a GroupMetric (see Appendix B) as follows: <!-- formula-not-decoded --> <!-- formula-not-decoded --> <!-- formula-not-decoded --> This will give a per group estimate of the absolute bias amplification, and calling its .average method will give the absolute bias amplification over all groups. ## D Comparisons with specialist methods ## D.1 Fairret Fairret [56] is a recently published toolkit for enforcing fairness in pytorch networks, using standard fairness constraints [23]. This toolkit has only been shown to work for tabular data. There are mathematical reasons to think that it will not work for bias preserving fairness metrics [72], such as equalized opportunity, on computer vision or NLP classifiers where classifiers obtain zero error on the training set [17], and therefore trivially satisfy all bias preserving fairness metrics. However, as Fairret uses standard relaxations, it should have comparable accuracy/fairness trade-offs to OxonFair when weakly enforcing demographic parity on image data (see [21]). Fairret should also have comparable performance on NLP data to the DP and EO regularized approaches reported in the main body of the paper. Here we focus on using Fairret to enforce Equal Opportunity on tabular data as shown in their paper. Figure 10 shows a comparison with Oxonfair. For a like-with-like comparison between Fairret and OxonFair we use 70% of the data for training for Fairret (which requires no validation data), and for OxonFair split this into 40% training data and 30% validation data. Figure 10: Comparing OxonFair and Fairret [56] on adult using sex as the protected attribute. A simple neural network classifier with two hidden layers is used as the base classifier. Fairness strengths are varied for different Fairret implementations. Difference in Equal Opportunity and Demographic Parity are considered. An OxonFair-based frontier using XGBoost is also displayed. <!-- image --> <!-- image --> In general, neural networks perform worse than boosting on tabular data, and should not be used where maximizing accuracy is a concern. Nonetheless, we see that Fairret shows worse accuracy/fairness trade-offs than OxonFair on neural networks with the same architecture. Compared to OxonFair, there are three challenges faced by Fairret that might be contributing to its worse performance. - · The enforced constraints are a relaxation of the underlying integer fairness constraints, and even when completely satisfied, they need not imply fairness [30]. - · A mismatch between errors on the training and test set. Even when models do not obtain zero training error they can still overfit, and minimizing equal opportunity on the training set does not imply that it is optimized on the test set. - · Failure to converge. To induce stability in the performance of Fairret we had to use a much larger minibatch size of 1000, as the minibatch must be large enough to estimate the EO violation somewhat stably if we want the method to converge. It is possible that either the batch size was still not large enough or that it was so large that it caused issues with optimizing the log loss. However, other issues might be the reason for the performance discrepancy. ## D.2 Specialist Equalized Odds Solvers In this section, we show how OxonFair can be adapted through the use of custom groups to mimic the performance of Specialist Equalized Odds solvers. We compare against the recently published [57], which was shown to outperform a wide range of existing methods. Like us, [57] assigns thresholds on a per-group basis to enforce fairness, and enforces fairness up to a prespecified threshold (e.g., equalized odds is less than 5%) while maximizing accuracy. Unlike the default behaviour of Oxonfair, however, it randomly assigns members of each group one of two different thresholds. To mimic this behaviour, we make use of the same trick used in Section A.1, namely that the 'inferred groups' used to assign thresholds are not expected to align with the true groups, and that we can introduce additional groups to increase the expressiveness of the model. Before showing how to enforce Equalized Odds, we strongly recommend that this is not used in practice. [57] also notes that enforcing it is contentious. Thresholding methods such as OxonFair and [57] can be understood as methods that trade-off sensitivity or recall against specificity. When enforcing Equalized odds, we will move to a new point on the sensitivity specificity curve, for the worst performing group, while degrading the performance for all other groups. Viewed through the lens of levelling up [8], and which specific harms are incurred by each group, versus a base classifier: enforcing equalized odds will increase one of the sensitivity and specificity for the worst performing group and decrease the other; and may decrease sensitivity and specificity for all other <!-- image --> <!-- image --> Accuracy Figure 11: Left: A close-up of possible per-group trade-offs when enforcing Equalized odds. This figure shows possible behaviour when enforcing Equalized odds with respect to sex on the adult dataset. Compared to the original predictor, we see a substantial decrease in recall for the worst performing group accompanied by a small increase in specificity. For the better performing group, both recall and specificity are decreased in order to enforce fairness. Right: The accuracy/fairness trade-offs of OxonFair using single threshold, deterministic multi-thresholds, and randomized multithresholds. Single threshold performs substantially worse for strong fairness constraints, but the other two strategies are interchangeable. All results shown are on validation data. groups. (see Figure Figure 11 for an illustration of the frontier found by OxonFair with respect to recall and specificity). While other methods are harder to analyze than thresholding, the fact that they have worse accuracy/fairness trade-offs suggests that they deteriorate classifiers more than simple thresholding. We consider four threshold sets for OxonFair. - 1. Default OxonFair using one threshold per group. - 2. A randomized version that mimics the behaviour of [57], by assigning members of each group to two subgroups with 50/50 probability. - 3. As randomized approaches can be criticized for their instability, we also consider a deterministic version that flips the scores of positively labelled data points scored below a threshold, and also flips the score of negatively labelled points scored above a threshold. - 4. An inferred variant that does the same as 3, but doesn't require access to groups at test time. Additional code for the last three options is shown below. These functions can be passed to FairPredictor using the inferred\_groups option. Our comparison with [57] can be summarized as the randomized and deterministic methods are broadly interchangeable both with [57] (see Figure 12) and with each other (see Figure 11 right), and they strongly outperform the default single-threshold OxonFair for strong fairness constraints. As both [57] and multi-threshold OxonFair solve the same optimization problem via different routes (a specialist LP solver for [57], grid-search for OxonFair) we expect them to obtain similar solutions, and this we find in Figure 12. However, [57] makes use of a formulation that is more efficient for large numbers of thresholds, but less expressive and harder to adapt to infered group membership, or new fairness or performance constraints. As such, it is unsurprising that it is substantially more efficient than grid-search over the 8 thresholds used in these experiments - at least when finding a solution with e.g. an Eodds violation of no more than 0.05. What was more surprising was that when using the code of [57] to compute an entire fairness/accuracy Pareto frontier, Oxonfair remained more efficient. See Table 9 for details. This remains a reminder that big-O notation is uninformative with respect to runtime, when our datasets are extremely limited in their number of groups. | Method | [57] (single fairness eval) | [57] (Complete Frontier) | OxonFair Complete Frontier | |----------|-------------------------------|----------------------------|------------------------------| | Runtime | 3 sec | 4m49.1 sec | 1m18s | Table 9: Runtime comparison between [57] and OxonFair, on the Folktables dataset [101], using four racial groups. This represents the largest problem with the most groups reported in [57], and as such is the experiment where we would expect [57] to outperform OxonFair the most with respect to runtime. While [57] is faster if enforcing fairness to a known amount, e.g., maximizing accuracy subject to EOdds&lt;0.05%, OxonFair remains faster for computing the entire frontier. <!-- image --> <!-- image --> Figure 12: A comparison between single-threshold OxonFair and deterministic multi-threshold OxonFair with [57]. As expected, Multi-threshold is directly comparable to [57], with single threshold performing worse. Table 10: A comparison of Fairlearn, and OxonFair when enforcing Equalized Odds. We enforce fairness at 2% on validation to roughly match accuracy with Fairlearn. At this value, there is limited change between versions of Oxonfair and the multi-threshold approach obtains half the fairness violation of Fairlearn at similar accuracy. | | Original | OxonFair Multi | OxonFair + | FairLearn | |----------------|------------|------------------|--------------|-------------| | Accuracy | 0.871016 | 0.866921 | 0.85996 | 0.866512 | | Equalized Odds | 0.09795 | 0.026676 | 0.025532 | 0.045989 | Figure 13: Enforcing Equalized odds using inferred characteristics. While the multi-threshold approach shows clear advantages on the validation set, this does not generalize to unseen test data. For unseen data, single threshold approaches show stronger degradation in accuracy, but their fairness constraints generalize better. This can be attributed to single threshold approaches selecting near constant classifiers when the constraints are strong, while the classifiers found by multi-threshold approaches are more vulnerable to sampling differences between validation and test. <!-- image --> <!-- image --> ## D.2.1 Equalized odds without using group membership at test-time As [57] requires access to groups at test time, we cannot directly compare with [57], we instead compare against FairLearn on the adult dataset, using sex as the protected characteristic. Results can be seen in Table 10, and Figure 13. Our approach differs from [102], as they enforce fairness with respect to inferred groups (without access to true group labels), while we directly use the inferred group labels to enforce fairness with respect to the underlying true groups. ## E Computer Vision Experiments Table 11: Hyperparameter details for the CelebA experiment. | Hyperparameter | Value/Range | |------------------|---------------| | Learning Rate | 0.0001 | | Batch Size | 32 | | Dropout Rate | 0.5 | | Backbone | Resnet-50 | | Weight Decay | 0 | | Optimizer | Adam [83] | | Epochs | 20 | Table 12: CelebA Attribute-level information from Ranaswamy et al. [28]. The columns are target attribute name, percentage of positive samples, skew. For example, Earrings has a skew of 0.97 towards g = -1 , that is, 97% of positive Earrings samples have gender expression label g = -1 ( Female ) | Attribute type | Attribute statistics | Attribute statistics | Attribute statistics | |------------------------|------------------------|------------------------|------------------------| | Inconsistently labeled | Positive | Skew | Skew | | BigLips | 24.1% | 0.73 | g = - 1 | | BigNose | 23.6% | 0.75 | g =1 | | OvalFace | 28.3% | 0.68 | g = - 1 | | PaleSkin | 4.3% | 0.76 | g = - 1 | | StraightHair | 20.9% | 0.52 | g = - 1 | | WavyHair | 31.9% | 0.81 | g = - 1 | | Gender-dependent | Positive | Skew | Skew | | ArchedBrows | 26.6% | 0.92 | g = - 1 | | Attractive | 51.4% | 0.77 | g = - 1 | | BushyBrows | 14.4% | 0.71 | g =1 | | PointyNose | 27.6% | 0.75 | g = - 1 | | RecedingHair | 8.0% | 0.62 | g =1 | | Young | 77.9% | 0.66 | g = - 1 | | Gender-independent | Positive | Skew | Skew | | Bangs | 15.2% | 0.77 | g = - 1 | | BlackHair | 23.9% | 0.52 | g =1 | | BlondHair | 14.9% | 0.94 | g = - 1 | | BrownHair | 20.3% | 0.69 | g = - 1 | | Chubby | 5.8% | 0.88 | g =1 | | EyeBags | 20.4% | 0.71 | g =1 | | Glasses | 6.5% | 0.80 | g =1 | | GrayHair | 4.2% | 0.86 | g =1 | | HighCheeks | 45.2% | 0.72 | g = - 1 | | MouthOpen | 48.2% | 0.63 | g = - 1 | | NarrowEyes | 11.6% | 0.56 | g = - 1 | | Smiling | 48.0% | 0.65 | g = - 1 | | Earrings | 18.7% | 0.97 | g = - 1 | | WearingHat | 4.9% | 0.70 | g =1 | | Average | 24.1% | 0.73 | | ## E.1 Methods We extensively used the codebase of Wang et. al [41] to conduct comparative experiments 11 . - · Empirical Risk Minimization (ERM) [103]: Acts as a baseline in our experiments where the goal is to minimize the average error across the dataset without explicitly considering the sensitive attributes. - · Adversarial Training with Uniform Confusion [74] : The goal is to learn an embedding that maximizes accuracy whilst minimizing any classifier's ability to recognize the protected class. The uniform confusion loss from Alvi et al. [74] is used following the implementation of [41]. - · Domain-Discriminative Training [41] : Domain information is explicitly encoded and then the correlation between domains and class labels is removed during inference. - · Domain-Independent Training [41] : Trains a different classifier for each attribute where the classifiers do not see examples from other domains. - · OxonFair + Multi-Head [21] : Described in Section 4.2. N -1 heads are trained to minimize the logistic loss over the target variables, where N is the total number of attributes. A separate head minimizes the squared loss over the protected attribute Male . Fairness is enforced on validation data with two separate optimization criteria. OxonFair-DEO calls fpredictor.fit(gm.accuracy, gm.equal\_opportunity, 0.01) to enforce Equal Opportunity. OxonFair-MGA calls fpredictor.fit(gm.min\_accuracy.min, gm.accuracy, 0) . ## E.1.1 Compute Details Computer vision experiments were conducted using a NVIDIA RTX 3500 Ada GPU with 12GB of RAM. Table 13: Comparing accuracy of fairness methods while varying minimum recall level thresholds, δ . | CelebA - 26 Attributes | δ = 0.50 | δ = 0.75 | δ = 0.85 | δ = 0.90 | δ = 0.95 | |--------------------------|------------|------------|------------|------------|------------| | Baseline (ERM) | 89 | 84.5 | 80.6 | 77.6 | 72.7 | | Adversarial | 87.8 | 82.4 | 78.2 | 75.2 | 69.3 | | Domain-Dependent | 82.3 | 76.8 | 72.4 | 68.6 | 62.2 | | Domain-Independent | 89.2 | 86.2 | 82.9 | 79.8 | 74.4 | | OxonFair | 89.9 | 87.3 | 84.4 | 81.8 | 76.9 | 11 https://github.com/princetonvisualai/DomainBiasMitigation Table 14: Extended Version of Table 2. Performance Comparison of Different Algorithmic Fairness Methods on the CelebA Test Set. Results monitor the mean Accuracy, Difference in Equal Opportunity (DEO) and the Minimum Group Minimum Label Accuracy across the attributes. | | ERM | Uniconf. Adv [74] | Domain Disc. [41] | Domain Ind. [41] | OxonFair DEO | OxonFair MGA | |------------------------------------|------------------------------------|------------------------------------|------------------------------------|------------------------------------|------------------------------------|------------------------------------| | Gender-Independent Attributes | Gender-Independent Attributes | Gender-Independent Attributes | Gender-Independent Attributes | Gender-Independent Attributes | Gender-Independent Attributes | Gender-Independent Attributes | | Acc. | 93.1 | 92.7 | 93.0 | 92.6 | 92.8 | 90.9 | | Min grp. min acc. | 64.1 | 72.3 | 76.5 | 71.2 | 72.3 | 85.8 | | DEO | 16.5 | 19.6 | 14.6 | 7.78 | 3.21 | 3.52 | | Gender-Dependent Attributes | Gender-Dependent Attributes | Gender-Dependent Attributes | Gender-Dependent Attributes | Gender-Dependent Attributes | Gender-Dependent Attributes | Gender-Dependent Attributes | | Acc. | 86.7 | 86.1 | 86.6 | 85.6 | 85.8 | 82.3 | | Min grp. min acc. | 43.4 | 53.7 | 59.6 | 53.8 | 52.5 | 78.5 | | DEO | 26.4 | 25.0 | 21.9 | 6.50 | 3.92 | 3.96 | | Inconsistently Labelled Attributes | Inconsistently Labelled Attributes | Inconsistently Labelled Attributes | Inconsistently Labelled Attributes | Inconsistently Labelled Attributes | Inconsistently Labelled Attributes | Inconsistently Labelled Attributes | | Acc. | 83.0 | 82.5 | 83.1 | 82.3 | 82.1 | 79.2 | | Min grp. min acc. | 36.1 | 43.0 | 50.2 | 42.7 | 44.3 | 69.5 | | DEO | 21.9 | 29.1 | 25.3 | 17.2 | 2.36 | 4.86 | Table 15: Performance comparison of Baseline, Adaptive g-SMOTE, g-SMOTE, OxonFair-DEO, and OxonFair-MGA on the training set. Reported are the means over the 32 labels selected by [17]. Methods marked * are reported from Zietlow et al. [17]. | 4 Protected Groups | | ERM | Adaptive g-SMOTE [17] | g-SMOTE* [17] | OxonFair-DEO | OxonFair-MGA | |----------------------|----------------|---------|-------------------------|-----------------|----------------|----------------| | Full Training Set | Acc. | 90 . 49 | 85 . 77 | 87 . 27 | 89 . 21 | 86 . 18 | | | Min. grp. acc. | 61 . 74 | 68 . 06 | 61 . 84 | 54 . 20 | 78 . 48 | | | DEO | 24 . 70 | 12 . 27 | 21 . 91 | 3 . 93 | 5 . 58 | Accuracy Figure 14: A comparison of the Pareto frontier on validation and test data when enforcing two fairness measures (DEO and Min Group Min Label Acc) for the Wearing Earrings attribute in CelebA whilst monitoring model accuracy. <!-- image --> | | Gender | Country | Ethnicity | Age | |------------|-----------------|-----------------|-----------------|-----------------| | English | 41200/7008/6927 | 44487/7744/7639 | 40731/6954/6845 | 39003/6628/6608 | | Polish | 11782/1461/1446 | 2218/489/471 | 8567/1199/1235 | 8610/1199/1235 | | Spanish | 2240/407/410 | 2299/436/439 | 2244/407/410 | 2249/407/410 | | Portuguese | 1408/150/163 | 1105/198/197 | 1377/150/163 | 1389/150/163 | | Italian | 2730/350/369 | 3769/514/516 | 2706/348/368 | 2676/349/368 | Table 16: Multilingual Twitter corpus train/val/test statistics. | | Gender=0 | Gender=1 | Age=0 | Age=1 | Coun- try=0 | Coun- try=1 | Ethnic- ity=0 | Ethnic- ity=1 | |------------|----------------------------------|----------------------------------|----------------------------------|----------------------------------|-----------------------|----------------------------------|------------------------|------------------------| | English | 5230/14461; 1096/3010; 1074/3009 | 6856/18776; 1447/3917; 1459/3999 | 4937/13279; 1060/2796; 1056/2753 | 6550/18199; 1357/3811; 1341/3874 | 4033/10764; 851/2218; | 7855/13931; 1693/2905; 1687/2996 | 5901/14297; 1236/2962; | 6036/18614; 1272/3883; | | | | | | | 846/2226 | | 1198/2990 | 1316/3963 | | Polish | 370/3552; | 18/3254; | 215/2401; | 18/3248; | 0/1127; | 5/629; | 219/2401; | 14/3248; | | Polish | 172/716; 194/787 | 13/730; 29/674 | 113/505; 103/526 | 14/730; 31/673 | 0/234; 0/253 | 2/151; 7/153 | 117/505; 113/526 | 10/730; 21/673 | | Spanish | 394/997; | 362/903; | 354/997; | 402/903; | 505/639; | 213/556; | 409/997; | 347/903; | | | 102/251; | 71/159; | 93/251; | 80/159; | 110/155; | 68/100; | 100/251; | 73/159; | | | 84/210 | 77/197 | 83/210 | 78/197 | 112/130 | 49/118 | 90/210 | 71/197 | | | 128/682; | 18/134; | 123/682; | 23/134; | 34/289; | 56/39; | 116/682; | 30/134; | | Portuguese | 24/60; | 55/103; | 29/60; | 50/103; | 24/40; | 50/68; | 48/60; | 31/103; | | Portuguese | 13/76 | 26/74 | 15/76 | 24/74 | 15/52 | 20/38 | 19/76 | 20/74 | | | 263/1127; | 119/541; | 209/1123; | 171/541; | 100/748; | 389/367; | 377/1121; | 3/541; | | Italian | 63/273; | 19/96; | 49/272; | 33/96; | 19/178; | 70/65; | 81/272; | 1/96; | | Italian | 63/244 | 23/106 | 42/243 | 44/106 | 19/169 | 83/71 | 86/242 | 0/106 | Table 17: Multilingual Twitter corpus breakdown. We report the count of positive and total samples across the train/val/test partitions for each ethnicity with specific values. We exclude samples where the label is marked as 'None' for a particular ethnicity. ## F NLP Experiments ## F.1 Experimental Details We employ a BERT-based model architecture [86], augmented with an additional head to simultaneously predict demographic factors (see Section 4.2. During training, we utilize the standard cross-entropy loss for the primary prediction task and a mean squared error loss for the demographic predictions, aggregating these to compute the overall loss. We ensure data consistency by excluding entries with missing demographic information. To facilitate easy comparison with different models, we select the Polish language for the multilingual Twitter corpus, noted for its high DEO score, to demonstrate how various models can reduce this score. We also conducted our experiment on the Jigsaw data. Unlike the multilingual Twitter corpus, the Jigsaw religion dataset contains three groups: Christian, Muslim, and others. The entire model, including the BERT backbone, is fine-tuned for 10 epochs using an initial learning rate of 2 × 10 -5 , following the original BERT training setup. All experiments are conducted on an NVIDIA A100 80GB GPU. ## F.2 Hate Speech Detection Task We follow the methodology outlined in [84] to conduct the hate speech detection task using our tool. Variables such as age and country in the multilingual Twitter corpus are binarized using the same method as described in [84]. The data splits for training, development, and testing are shown in Table 16. Multilingual Experiment. To demonstrate the capability of our proposed tool in handling multilingual scenarios, we conduct experiments across five languages: English, Polish, Spanish, Portuguese, and Italian and the results are shown in Table 18. Observations from the results indicate that: 1) Our model improves equal opportunity performance with minimal sacrifice to the main task performance. 2) The datasets in Polish and Portuguese show higher equal opportunity, indicating more severe bias compared to other languages, yet our proposed method effectively enhances performance in these conditions. | | original DEO | updated DEO | original Accuracy | updated Accuracy | |------------|----------------|---------------|---------------------|--------------------| | English | 5.13 | 3.19 | 84 | 84.2 | | Polish | 21.4 | 10.1 | 89.6 | 85.8 | | Spanish | 9.39 | 1.64 | 69.8 | 67.3 | | Portuguese | 17.3 | 1.29 | 60.7 | 52.1 | | Italian | 7.77 | 0.42 | 75.6 | 77.5 | Table 18: Multilingual Experiment. | | original DEO | updated DEO | original Accuracy | updated Accuracy | |-----------|----------------|---------------|---------------------|--------------------| | Gender | 21.4 | 8.45 | 89.6 | 88.5 | | Country | 10.2 | 8.32 | 81.4 | 82.2 | | Ethnicity | 8.56 | 4.92 | 83.1 | 82.7 | | Age | 12.5 | 6.02 | 82.1 | 80.5 | <!-- image --> Table 19: Demographic Experiments. Figure 15: Demographics frontier plot. Demographic Experiments. To demonstrate our tool's ability to address various demographic factors in text, we conducted experiments focusing on age, country, gender, and ethnicity, with results detailed in Table 19 and Figure 15. The outcomes reveal that our tool effectively improves equal opportunities across all demographic factors, underscoring its capability to handle general debiasing scenarios. | | Christian | Other | Muslim | |-------|-------------|----------|-----------| | Train | 22845/1892 | 3783/554 | 9527/2390 | | Valid | 5681/470 | 946/148 | 2425/578 | | Test | 2944/251 | 604/78 | 1119/319 | | | Black | Asian | |-------|-----------|----------| | Train | 6718/2811 | 2187/246 | | Valid | 1684/698 | 547/61 | | Test | 841/364 | 284/25 | Table 20: Jigsaw religion data. Table 21: Jigsaw race data. ## F.3 Toxicity Classification Task We also evaluate toxicity classification using the Jigsaw toxic comment dataset [85], which has been transformed into a Kaggle challenge. To demonstrate the ability of OxonFair to handle multiple protected groups, we consider religion as the protected attribute and evaluate performance across three groups: Christian, Muslim, and Other. Owing to the limitted dataset size, all samples labelled as a religion that was neither Christian nor Muslim were merged into Other and unlabeled samples were discarded. The statistics for this dataset are shown in Table 20, where each cell displays the count of negative and positive samples, respectively. The experimental results are discussed in the main paper. For the Jigsaw dataset, we follow the setup of [78], selecting race as the protected attribute. We focus on the subset of comments identified as Black or Asian, as these two groups exhibit the largest gap in the probability of being associated with toxic comments. The data statistics are shown in Table 21 where each cell displays the count of negative and positive samples, respectively. The experimental results, presented in Table 23, demonstrate that our proposed tool outperforms all other models. | | Religion=Muslim | Religion=Christian | Religion=Other | Race=Black | Race=Black | Race=Asian | |-------|-------------------|----------------------|------------------|--------------|--------------|--------------| | train | 2390/11917 | 1892/24737 | 554/4337 | train | 2811/9529 | 246/2433 | | test | 319/1438 | 251/3195 | 78/682 | test | 364/1205 | 25/309 | | valid | 578/3003 | 470/6151 | 148/1094 | valid | 698/2382 | 61/608 | Table 22: Jigsaw corpus breakdown. We report the count of positive and total samples across the train/val/test partitions for each religion and race with specific values. We exclude samples where the label is marked as 'None' for a particular ethnicity. | | F1 score | Balanced Accuracy | Accuracy | DEO | |------------------------------|------------|---------------------|------------|-------| | Base | 53.4 | 68.9 | 72.1 | 23.7 | | CDA [29] | 52.7 | 68.2 | 76.4 | 7.65 | | DP [23] | 47.4 | 64.6 | 72.6 | 4.35 | | EO [3] | 47.1 | 64.5 | 73.2 | 5.85 | | Dropout [88] | 52.4 | 68 | 72 | 12.7 | | Rebalance [11] | 51.7 | 67.5 | 74.4 | 5.57 | | OxonFair (Accuracy) | 37.5 | 60.8 | 77.7 | 2.1 | | OxonFair (F1) | 52.8 | 68.5 | 69.2 | 11.9 | | OxonFair (Balanced Accuracy) | 52.7 | 68.5 | 68.5 | 0.41 | | OxonFair (Accuracy) | 38.6 | 61.1 | 77.5 | 12.3 | | OxonFair (F1) | 53 | 68.7 | 69.4 | 16.4 | | OxonFair (Balanced Accuracy) | 53.2 | 68.9 | 67.8 | 20.5 | Table 23: Jigsaw dataset: Race (w groups: Black, Asian). ## G Comparison Table Information In this section, we provide further details on the information from Figure 1. While all approaches have many fairness definitions that can be computed, very few can be enforced via bias mitigation. As a minimum, OxonFair supports enforcing the methods from tables 5 and 6 (eliminating duplicates give the number 14 in the table). In addition to this, it supports a wide range of metrics that aren't used in the literature, for example minimizing the difference in balanced accuracy, F1 or Matthews correlation coefficient (MCC) between groups, e.g., by using balanced\_accuracy.diff as a constraint. It also supports the definitions set out in Appendix C, including minimax notions; absolute bias amplification; and enforcing for minimum rates per group in recall, or precision, or sensitivity actively promoting levelling-up [8]. ## G.1 FairLearn Methods Support Fairlearn provides an overview of the supported bias mitigation algorithms and supported fairness constraints in their documentation 12 . The number of performance and fairness objectives supported are dependent on the method. Methods supported include ExponentiatedGradient and GridSearch that provide a wrapper around the reductions approach to fair classification of Agarwal et al. [31]. Supported fairness definitions for classification are Demographic-Parity, Equalized Odds, True Positive Rate Parity, False Positive Rate Parity and Error Rate Parity. For postprocessing the ThresholdOptimizer approach of Hardt et al. [3] is supported. The adversarial approach of [32] is also supported and can enforce fairness based on Demographic Parity and Equalized Odds. The CorrelationRemover method provides preprocessing functionality to remove correlation between sensitive features and non-sensitive features through linear transformations. It should be emphasized that Fairlearn also provides an interface for defining custom Moments for fairness and objective optimization, however, as of the current version 0.10 no documentation or examples are provided for doing so. ## G.2 AIF360 Methods Support AIF360 provides support for a wide variety of methods 1314 that enforce fairness, many of which overlap with Fairlearn. We consider group fairness approaches. Preprocessing algorithms include DispirateImpactRemover [11], LFR [104], Optimized Preprocessing [26], Reweighting [25] and FairAdapt [105]. Inprocessing algorithms include AdversarialDebiasing [32], PrejudiceRemover [106], Exponentiated GradientReduction and GridSearchReduction [31]. Postprocessing approaches include CalibratedEqOddsPostprocessing [44], EqOddsPostprocessing [3], RejectOptionClassification [25]. ## G.3 Societal Impacts We reiterate the findings of Balayn et al., who note that fairness toolkits can act as a double-edged sword [92]. Open source toolkits can enable wider adoption of the assessment and mitigation of bias and fairness related harms. However, if misused, these toolkits can create a flawed certification of algorithmic fairness, endangering false confidence in flawed methodologies [55, 107]. We join growing calls in encouraging practitioners to be reflective in their use of fairness toolkits [60]. Specifically, we urge practitioners to adopt a harms first approach to fairness and be reflective in their measurement and enforcement of fairness. 12 https://FairLearn.org/main/user\_guide/mitigation/index.html 13 https://aif360.readthedocs.io/en/stable/modules/algorithms.html 14 https://aif360.readthedocs.io/en/stable/modules/sklearn.html ## NeurIPS Paper Checklist ## 1. Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: [Yes] Justification: The main claims in the abstract and introduction provide details on the paper's contribution and scope. ## 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: Limitations are reported in the conclusions section. ## 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [NA] ## 4. Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: We provide full descriptions of the information needed to reproduce the main claims and conclusions throughout the main body of the paper, the appendix and the substantial codebase for our toolkit. ## 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: We provide referenced details on how to access data and benchmarks used in our experimental demonstration. We provide code examples as we simply train PyTorch implementations of popular neural networks. We also provide all code for our toolkit including example notebooks. ## 6. Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: Experimental settings are detailed in the main paper, appendix and also in the codebase. We use well known datasets that often have recommended splits. ## 7. Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [No] Justification: The main contribution of this paper is a new library that offers increased functionality to data scientists and practitoners and not a single new method. Our experimental demonstration highlights the flexibility of OxonFair across domains and data modalaties (Computer Vision, NLP, Tabular Data) ## 8. Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [No] Justification: We include details about computer resources for each experimental section. We report time of execution when comparing against Fairlearn for multiple groups. ## 9. Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ? Answer: [Yes] ## 10. Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [Yes] Justification: This paper has a section in the conclusion that discusses the potential positive and negative societal impacts of this work. This is also expanded in the appendix ## 11. Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [NA] ## 12. New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [Yes] Justification: A new library is presented which is supported by extensive documentation and example notebooks for practitioners. An Apache 2.0 licence is documented. ## 13. Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [NA] ## 14. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [NA]
zsXbGJJ7Oo
G2D: From Global to Dense Radiography Representation Learning via Vision-Language Pre-training
Medical imaging tasks require an understanding of subtle and localized visual features due to the inherently detailed and area-specific nature of pathological patterns, which are crucial for clinical diagnosis. Although recent advances in medical vision-language pre-training (VLP) enable models to learn clinically relevant visual features by leveraging both medical images and their associated radiology reports, current medical VLP methods primarily focus on aligning images with entire reports. This focus hinders the learning of dense (pixel-level) visual features and is suboptimal for dense prediction tasks (e.g., medical image segmentation). To address this challenge, we propose a novel medical VLP framework, named **Global to Dense level representation learning (G2D)**, which aims to learn global and dense visual features simultaneously using only image-text pairs without extra annotations. In particular, G2D designs a **Pseudo Segmentation (PS)** task, which enables the model to learn dense visual features during VLP. Notably, generating PS masks can be performed on the fly during VLP, which does not incur extra trainable parameters. With this simple yet effective idea, G2D achieves superior performance across 5 medical imaging tasks and 25 diseases. Particularly, in the segmentation task which requires dense visual features, **G2D surpasses existing models even with just 1% of the training data for finetuning, compared to 100% used by other models**. The code can be found in https://github.com/cheliu-computation/G2D-NeurIPS24/tree/main.
https://openreview.net/pdf/266314e449f23eb30c332e9f0688da33556f643c.pdf
[ { "confidence": 5, "rating": 5, "review_id": "wPn9WWqSQg", "review_text": "This paper proposes G2D, a novel vision-language pre-training (VLP) framework for medical imaging that aims to learn both global and dense visual representations from radiography images and their associated radiology reports. The key innovation is a pretext task called Pseudo Segmentation (PS), which uses a pseudo mask derived from attention maps to guide the learning of dense visual features during pre-training. The authors demonstrate that G2D outperforms existing medical VLP approaches on various downstream tasks including classification, segmentation, object detection, and zero-shot visual grounding across multiple medical imaging datasets. Notably, G2D shows strong performance on segmentation tasks even when fine-tuned on very limited data.\n\nNovel approach: The paper introduces an innovative method for learning dense visual representations in medical VLP without requiring pixel-level annotations, addressing a key limitation of existing approaches.\n\nWell-motivated: The authors provide a clear rationale for why learning dense representations is important for medical imaging tasks and why existing VLP methods struggle with this.\n\nComprehensive evaluation: The method is evaluated on a wide range of downstream tasks and datasets, demonstrating its versatility and effectiveness across different medical imaging applications.\n\nStrong results: G2D consistently outperforms existing methods, especially on segmentation tasks where it achieves impressive results with very limited fine-tuning data.\n\nAblation studies: The paper includes thorough ablation experiments to validate key design choices and components of the method.\n\nPotential impact: The proposed approach could significantly reduce the need for large annotated datasets in medical imaging, which is a major bottleneck in the field.\n\nLimited theoretical analysis: While the method is empirically strong, there is little theoretical justification for why the pseudo segmentation task leads to improved dense representations.\n\nComplexity of the approach: The method involves several components and processing steps, which may make it challenging to implement and potentially limit its adoption.\n\nComputational resources: The pre-training process appears to be computationally intensive (16 A100 GPUs), which could be a barrier for researchers with limited resources.\n\nGeneralization to other domains: While the focus on medical imaging is valuable, it's unclear how well this approach would generalize to other vision-language domains.\n\nComparison to more recent baselines: Some of the baselines used for comparison (e.g., ConVIRT, GLoRIA) are somewhat older.\n\nComparison to more recent medical VLP methods would strengthen the evaluation.\n\nMajor concerns:\nMy primary concern revolves around the authors' claim that current medical VLP methods primarily align images with entire text reports. This assertion appears to be inconsistent with the facts, as evidenced by several papers that have employed local alignment between image regions and text. This factual contradiction significantly undermines the novelty of the present work. For instance:\n\nGLoRIA (Huang et al., ICCV 2021): \"Global-Local Representation Alignment for Improved Visual Recognition in Medical Imaging\"\nThis paper introduced a global-local alignment approach, learning finer-grained representations by aligning image patches with text tokens.\nMGCA (Wang et al., arXiv 2022): \"Multi-Granularity Cross-Modal Alignment for Generalized Medical Visual Representation Learning\"\nThis method employed a multi-granularity alignment strategy, including global, local, and fine-grained levels of alignment.\nBioViL (Boecking et al., ECCV 2022): \"Making the Most of Text Semantics to Improve Biomedical Vision–Language Processing\"\nThis work proposed a method to improve biomedical vision-language processing by leveraging text semantics, which includes local alignment strategies.\nMedKLIP (Wu et al., medRxiv 2023): \"Medical Knowledge Enhanced Language-Image Pre-training\"\nThis approach utilized external knowledge bases to enhance local alignment, achieving more fine-grained image-text matching.\nGiven these existing works, the authors' characterization of the current state of medical VLP appears inaccurate. This misrepresentation significantly weakens the claimed novelty of their approach. The authors should provide a more accurate description of existing methods and clearly articulate how their approach differs from or improves upon these established local alignment strategies.\n\nOther minor concerns:\nHave you explored the quality of the learned representations at different levels of the network? Are there significant differences in the quality of features at different scales?\nHow sensitive is the method to the choice of threshold used in pseudo mask construction? The ablation shows results for a few values, but is there a principled way to choose this threshold?\nHave you investigated the potential of using the pseudo masks generated during pre-training for weakly supervised segmentation tasks?\nHow does the performance of G2D change as the amount of pre-training data is varied? Is there a clear relationship between pre-training data volume and downstream task performance?\nGiven the computational requirements for pre-training, have you explored any techniques for making the approach more efficient, such as progressive training or curriculum learning?" }, { "confidence": 4, "rating": 5, "review_id": "QpQD7IdrMu", "review_text": "This manuscript describes a medical vision-language pre-training framework called Global to Dense level representation learning (G2D), that learns global and dense visual features simultaneously with only image-text pairs, by exploiting the aggregated attention map from the vision encoder for a pseudo segmentation pretext task. The improved (frozen) vision encoder is then utilized as part of the model pipeline for a number of downstream tasks (e.g. segmentation, classification)\n\n- Pseudo segmentation pretext task enables dense segmentation during pre-training, and avoids external resources as for alignment-based methods, and limitations on high-level semantic representations in reconstruction-based methods\n - Importance of associating semantic meaning verified via experiment\n\n- Unclear if specific sentence/phrase to individual image region alignment is achieved, for dense learning\n - Lack of fine-grained pixel-level evaluation of masks\n\n1. The accuracy of the initial aggregated attention map appears possibly non-optimal, given that additional thresholding by body mask is required. As such, it might be considered to quantify the accuracy of these maps, possibly against segmentation ground truth.\n\n2. In Section 3.2, it is stated that a threshold is applied (at 85%) to transform the aggregated attention map into a binary mask, before smoothing. It might be clarified if the need for smoothing (and related smoothing parameters) was empirically determined.\n\n3. In Section 3.3, it is stated that \"This decoder takes visual feature V_i as input and utilises the pseudo mask ˜M_i as the supervisory signal for the pretext task\". It might be clarified as to whether and how specific text can be matched to specific (separate) image regions, as in Figure 4 of Section A.7. In other words, while Figure 4 shows specific text descriptions corresponding to specific image regions, were these correspondences/alignments indicated by the proposed G2D model, or are they external manual observations? A.1 suggests no, but this might be explicitly stated.\n\n4. In Section 4, the choice of ResNet-50 as the encoder over other plausible choices (e.g. U-Net encoder) might be briefly explained.\n\n5. For Table 1, it might be clarified as to what \"encoder-decoder\" refers to - the updating of both encoder and decoder?" }, { "confidence": 4, "rating": 6, "review_id": "jyhquw0TsQ", "review_text": "The paper proposes an encoder-decoder medical VLP approach for global-to-dense visual representation learning. Pseudo segmentation is adopted for dense level learning. Rich experiments validate the effectiveness of the proposed method.\n\n1. The motivation behind the work is clear. Pseudo-segmentation supervision is effective, which is validated by experiments.\n2. The experiments are rich and ablation analysis shows the contributions of each component and design.\n3. The illustrations are clear and easy to understand.\n4. The improvements are consistent and sometimes substantial.\n\n1. The comparisons with MGCA and MRM in the CXR14 dataset are not included in Table 3, but Table 4 includes the comparisons with MGCA and MRM. What are the reasons behind this?\n2. Transformer-based vision encoder is not analyzed.\n3. The balance between VLA and PA losses is not analyzed.\n\nIs it not applicable to compare with MGCA and MRM in the CXR14 dataset?" }, { "confidence": 4, "rating": 6, "review_id": "vgnjuXg82b", "review_text": "The paper proposes a new medical vision-language model, G2D, which employs vision-language alignment (VLA) and pixel alignment (PA) strategies, combined with a pseudo segmentation (PS) pre-training task, to learn global and dense visual representations from medical images. The VLA strategy is used to learn global representations of images and texts, while the PS task constructs pseudo masks through a parameter-free mechanism to facilitate the learning of dense representations. The method is comprehensively validated across five downstream tasks (image segmentation, object detection, zero-shot image visual grounding, zero-shot image classification, and fine-tuned image classification), demonstrating its effectiveness in handling both unimodal and cross-modal tasks.\n\n+ The paper is well-written, with the motivation, method, and results clearly presented. A minor concern is the reference format; it should be [1] instead of (1) according to the NeurIPS template.\n\n+ A significant concern with most existing works is that they operate primarily at the Image-Text Retrieval level, similar to the perceptual level of CLIP, and do not effectively capture dense features between modalities. The G2D model addresses this issue by integrating Vision-Language Alignment (VLA) and Pseudo Segmentation (PS) tasks to facilitate simultaneous learning of global and dense visual features. This multi-level feature learning significantly enhances the model's performance in tasks requiring dense feature perception, such as segmentation.\n\n+ During pre-training, the G2D method utilizes only image-text pairs without the need for additional annotated data. By generating pseudo masks on the fly through the PS task, it reduces the cost and complexity associated with data annotation.\n\n+ The G2D method is novel, and the experiments are robust. Experimental results on five medical imaging tasks involving 25 diseases demonstrate that the G2D model outperforms existing models, even with minimal fine-tuning data. Notably, in segmentation tasks requiring dense visual features, G2D achieves excellent results with just 1% of the training data for fine-tuning.\n\nMajor concerns:\n\n- The attention maps could introduce errors in pseudo mask, and these errors may propagate throughout the training process. To address this, a clear validation strategy needs to be outlined. For instance, in Figure 2, aggregated attention map might incorrectly highlight irrelevant regions. It is essential to establish methods for **detecting** and **measuring** these errors to ensure the reliability of the model. I hope the authors could quantify the errors in aggregated attention map and pseudo mask during the rebuttal period.\n\nMinor concerns:\n\n- The training and validation of the model rely on specific datasets, which may introduce biases and potentially affect the model's generalizability to different datasets.\n\n- It is uncertain whether the method can be effectively extended to vision-language tasks involving 3D imaging (e.g., CT and MRI), presenting a limitation in its current scope of application.\n\n- How do you detect and correct the errors made by aggregated attention map?" } ]
## G2D: From Global to Dense Radiography Representation Learning via Vision-Language Pre-training - 6 Che Liu 1,2 , Cheng Ouyang 3,8,9 Sibo Cheng 10 Anand Shah 6,7 Wenjia Ba i 2,3,4 Rossella Arcucci 1,2 1 Department of Earth Science and Engineering, Imperial College London, UK 2 Data Science Institute, Imperial College London, UK 3 Department of Computing, Imperial College London, UK 4 Department of Brain Sciences, Imperial College London, UK Department of Infectious Disease Epidemiology, Imperial College London, UK 7 Royal Brompton and Harefield Hospitals, UK 8 Department of Engineering Science, University of Oxford, Oxford, UK 9 Institute of Clinical Sciences, Imperial College London, UK 10 CEREA, École des Ponts and EDF R&amp;D, Île-de-France, France. [email protected] ## Abstract Medical imaging tasks require an understanding of subtle and localized visual features due to the inherently detailed and area-specific nature of pathological patterns, which are crucial for clinical diagnosis. Although recent advances in medical vision-language pre-training (VLP) enable models to learn clinically relevant visual features by leveraging both medical images and their associated radiology reports, current medical VLP methods primarily focus on aligning images with entire reports. This focus hinders the learning of dense (pixel-level) visual features and is suboptimal for dense prediction tasks (e.g., medical image segmentation). To address this challenge, we propose a novel medical VLP framework, named G lobal to D ense level representation learning ( G2D ), which aims to learn global and dense visual features simultaneously using only image-text pairs without extra annotations. In particular, G2D designs a P seudo S egmentation ( PS ) task, which enables the model to learn dense visual features during VLP. Notably, generating PS masks can be performed on the fly during VLP, which does not incur extra trainable parameters. With this simple yet effective idea, G2D achieves superior performance across 5 medical imaging tasks and 25 diseases. Particularly, in the segmentation task which requires dense visual features, G2D surpasses existing models even with just 1% of the training data for finetuning, compared to 100% used by other models. The code can be found in https://github.com/cheliucomputation/G2D-NeurIPS24/tree/main. ## 1 Introduction In medical image analysis, learning global and dense visual representations typically requires laborintensive and costly image and pixel-level annotations [1, 2]. Vision-language pre-training (VLP) attempts addressing this by aligning vision and language content using paired datasets [3, 4, 5, 6]. Although existing medical VLP methods excel at learning global visual features [7], they face challenges with dense visual features because the level of detail in text reports does not offer sufficient Figure 1: Comparing existing medical VLP methods with G2D: a) Alignment-based approaches lack dense (pixel-level) feature learning. b) Reconstruction-based approaches do not align with text, resulting in a deficiency in discriminative and clinically relevant visual features. c) The framework of G2D (proposed) learns dense, clinically relevant, text-aligned visual features through derived pseudo masks and image-text alignment. We use red text to highlight the deficiencies of existing methods and blue text to emphasize our advantages. <!-- image --> pixel-level supervision for learning these more detailed aspects. Existing medical VLP methods are categorized into two main types, as shown in Fig. 1: - · Alignment-based Approaches , which focus on aligning images with reports [4, 8, 9, 5, 6, 2, 10]. Although methods like [4, 8, 9] align images with entire reports and text tokens, they struggle to learn dense, clinically relevant visual features. This is due to the ambiguous supervision targets provided by text tokens, which lack explicit relational pairing with image regions, as discussed in [2]. - · Reconstruction-based Approaches , which learn representations by reconstructing masked images or reports using masked modeling techniques [11, 12]. However, they also lack success in capturing dense, clinically relevant visual features, as the reconstruction task primarily focuses on low-level patterns (texture, shape) rather than high-level semantics [13]. Despite advancements in medical VLP, limitations still exist. Current alignment approaches align image patches with text tokens in a brute-force manner and possibly cause misalignments when some word tokens ( e.g., 'compatible' or 'unremarkable') lack direct visual counterparts, leading to ambiguous local alignments. Meanwhile, reconstruction-based approaches may ignore high-level image semantics. They are designed to recover low-level visual information such as intensity and texture, without accounting for high-level semantics [14, 13, 15]. As a result, both approaches perform suboptimally for downstream tasks, such as semantic segmentation and visual grounding, which require learning of granular visual features that are aligned with high-level semantics. While numerous VLP methods are designed to capture dense visual features for natural image datasets (e.g., ImageNet) they often struggle to transfer directly to medical images because they depend on a well-trained object detection model [16, 17] or a well-aligned VLP model [18, 19]. In the medical domain, obtaining such pre-trained models is difficult as objects can be defined in various ways within a single medical image (e.g., based on organs, anatomical structures, or abnormal regions) . Additionally, in medical domain, there is a lack of foundational VLP models that are both publicly accessible and are trained on sufficiently large image-text pairs that cover diverse medical imaging applications. In response to the aforementioned challenges, we introduce a novel medical VLP approach termed G2D. This approach is designed to extract global and dense visual representations from radiography along with their associated radiology reports, with improved feature granularity and enriched semantic information . Central to our approach is a pretext task, P seudo S egmentation (PS), which is guided by a pseudo mask (segmentation target) derived from a carefully refined and filtered attention map. PS encourages the model to learn dense representations through a pixel-level pretext task that incorporates high-level semantics. This approach, in contrast to traditional methods that align image patches with text tokens, inherently mitigates the misalignment bias and allows learning of more representative features. Notably, the PS pretext task can be implemented to run concurrently with vision-language alignment, ensuring that the model can be trained end-to-end, contrasting with the two-stage training methods [18]. To evaluate the effectiveness of G2D relative to other state-of-the-art (SOTA) VLP approaches, we deploy the pre-trained model across a diverse range of downstream tasks, including medical image classification, semantic segmentation, object detection, as well as zero-shot image classification and visual grounding, on six public large-scale CXR datasets. The experimental results demonstrate the superior performance of G2D over existing VLP approaches on these medical applications. Overall, our contribution is three-fold: - 1. We introduce G2D, the first end-to-end encoder-decoder medical VLP approach designed to learn visual representations from the global level down to the dense level, supervised by paired radiology reports and a pixel-wise pretext task. - 2. Wecarefully design a pretext task tailored for medical VLP, pseudo segmentation. It formulates a pseudo mask as segmentation target, allowing the model to learn dense visual representations in the pretext task which can benefit downstream dense visual tasks in medicine. The pseudo mask can be generated using a parameter-free processor that leverages the attention map derived from the visual representation associated with radiology reports. - 3. We conduct comprehensive experiments to validate the efficacy of the proposed G2D approach, which outperforms peer approaches across five uni-modal and cross-modal downstream tasks. ## 2 Related Works Alignment-based Medical VLP. Drawing inspiration from [3], aligning images with their corresponding textual descriptions in the latent space has led to notable advancements in VLP. Within the CXR domain, while ConVIRT [4] made an early attempt at employing bidirectional contrastive learning to globally align entire images with their paired reports, there remained room for refinement. GLoRIA [8] and MGCA [9] represent advancements in image-report alignment, introducing sophisticated global-local methodologies to the field [8, 9]. These approaches endeavor to establish correspondences between distinct image and text tokens. However, it is crucial to recognize that the granularity of token-level alignment could inadvertently introduce distortions to the medical context, potentially leading to misalignments, as illustrated by [20, 2]. Med-UniC [20] utilizes augmented text in VLP training to cultivate language invariance, with the goal of mitigating linguistic biases from VLP. Meanwhile, MedKLIP [5] and KAD [21] harness domain-specific knowledge from external annotated datasets to enhance textual information extraction. Notably, these approaches [20, 5, 21] are contingent upon external resources or extra data to optimize cross-modal representation learning, which could potentially constrain their generalizability. Reconstruction-based Medical VLP. Several studies, including [12, 11, 22], have employed reconstruction of image and text tokens as a pretext task within VLP. Specifically, MRM [12] endeavors to reconstruct the original image from a masked version and simultaneously aims to regenerate the original text using both the masked image and text as inputs. Conversely, PRIOR [11] adopts a strategy that focuses on cross-modal representation by reconstructing images and sentences based on complete image and report inputs. An enhancement to the MRM [12] approach is proposed by [22], where token weights are adjusted during the reconstruction phase. While these methods have demonstrated promising outcomes, the ability of the reconstruction pretext task to capture high-level semantic representations is limited, as shown in [14, 15, 13], and is further challenged by the absence of explicit semantic-related constraints in dense visual representation learning. ## 3 Methodology The central aim of G2D is to learn global and dense visual representations from medical images under the supervision of their corresponding radiology reports. As illustrated in Fig 2 Left, G2D integrates two alignment strategies: vision-language alignment (VLA) that learns global representations, and pixel alignment (PA) that focuses on granular representation via a pixel-level pretext task, P seudo S egmentation ( PS ). The pseudo mask for PS is constructed through a parameter-free mechanism, which is operated alongside VLA. The PS pretext task enables G2D to derive dense representations at both encoder and decoder levels during pre-training. Moreover, the task head of the pretext task facilitates a smoother transfer for the pre-trained encoder to be applied to downstream segmentation tasks, reducing the gap between the dense visual representation learned from VLP and the needs of downstream dense visual tasks after VLP. This contrasts with previous meth- Figure 2: Left: Framework of G2D. Right: Pipeline for pseudo mask construction. We visualize the constructed pseudo mask and corresponding sentence in the radiology report in Sec A.7. <!-- image --> ods [4, 8, 9, 21, 5, 6] that typically transfer only the pre-trained encoder, potentially leading to an information gap between the pre-training and downstream tasks. ## 3.1 Vision-Language Contrastive Learning We utilise a dual-encoder image-text contrastive approach following [4, 8, 9, 5]. Given a training set S consisting of N pairs of image-text ( v , l i i ) , where v i ∈ V denotes an image and l i ∈ L denotes a text report, i = 1 2 3 , , , ..., N , G2D employs an image encoder F e : V ↦→ R D v to encode the image into an embedding of dimension D v , and a text encoder F l : L ↦→ R D l to encode the text report into an embedding of dimension D l . The embedded image and text features can be denoted as S = ( { v 1 , l 1 ) , ( v 2 , l 2 ) , . . . , ( v N , l N ) } , where v i = F e ( v i ) and l i = F l ( l i ) . As depicted in Fig. 2, G2D incorporates two alignment strategies: VLA and PA. For VLA, the model aims to learn global visual and text representations by pulling the embeddings of paired image-report samples closer, while distancing embeddings of unpaired samples, using a contrastive loss L VLA . The objective of contrastive learning is to predict N positive matched pairs ( v , l i i ) and N 2 -N negative pairs among N × N possible image-text pair combinations [3]. Subsequently, two non-linear vision and language projectors P v and P l transform v i and l i into the same dimension d , where ˆ v i = P v ( v i ) , ˆ l i = P l ( l i ) , and ˆ v i , ˆ l i ∈ R d . After obtaining image feature vectors [ ˆ v i ] N i =1 and text feature vectors [ ˆ l i ] N i =1 with the same dimension d , the contrastive loss L VLA can be formulated as: <!-- formula-not-decoded --> σ denotes the temperature hyper-parameter empirically set to 0.07 following [9], and K ∈ N is the batch size. ## 3.2 Pseudo Segmentation Mask Construction Notably, although MedSAM [23] claims to build image-mask pairs, it requires box prompt inputs not available in the MIMIC-CXR [24] dataset. Designing a box prompt for each image is labor-intensive and unfeasible for this work, so we construct the pseudo mask based on attention maps. Attention Aggregation. Inspired by CLIP [3], we incorporate an attention pooling mechanism in conjunction with the non-linear projector P v to derive a pixel-wise attention map. A dense feature map V i is extracted from the final convolutional layer before the pooling operation in the image encoder F e , with the dimension C × H × W . Here, C denotes the number of channels, while H and W represent the height and width of the feature maps. Subsequently, we reshape V i into a dimension of HW × C . In this way, V i can be interpreted as a sequence of pixel embeddings, where each token in this sequence represents the embedding of an individual pixel. The length of this sequence is defined by the number of channels, C . A special token, [CLS] , is introduced to aggregate all pixel embeddings through multi-head self-attention (MHSA) [25, 3]. This process offers an attention score matrix W h i for each pixel, with dimensions h × H × W . Here, h signifies the attention head number, and h ∈ H , with H being the total number of attention heads. This attention score matrix characterizes the information exchange between pixels and semantics provided by the text [3, 18], and therefore it carries semantic information and is an ideal candidate for constructing the pretext pseudo mask. To derive the pseudo mask, we aggregate W h i across all attention heads to produce ˆ W i , as described by: <!-- formula-not-decoded --> Mask Filtering and Edge Smoothing. After obtaining the aggregated attention map ˆ W i , we upsample it to match the original image dimensions H ′ × W ′ . To remove pseudo mask regions in the background, we construct a body mask for each CXR image using a histogram-based thresholding approach, following common practice [26, 27]. Subsequently, all attention scores outside the body mask are set to zero. A threshold is applied to filter out low attention scores within the body mask, transforming ˆ W i into a binary mask. The threshold is determined at the 85% percentile of attention scores from ˆ W i . The binary pseudo mask M i is formulated as: <!-- formula-not-decoded --> <!-- formula-not-decoded --> To smooth the square-like boundary in the mask caused by upsampling, we apply bilateral filtering (BF) [28] to M i , resulting in a refined pseudo mask ˜ M i , as shown in Fig. 2 Right. A comprehensive ablation study discussing the threshold and smoothing operation is presented in Sec. 4.5. ## 3.3 Dense Visual Representation Learning through Pseudo Segmentation in VLP While the global visual representation can be learned via VLA, dense representation often lacks direct alignment. To tackle this limitation, we introduce an image decoder, denoted as F d , as shown in Fig. 2 Left. This decoder takes visual feature V i as input and utilises the pseudo mask ˜ M i as the supervisory signal for the pretext task. We employ the commonly used soft Dice loss and binary cross-entropy loss [27] to optimise this task. The training loss function for L PA is formulated as: <!-- formula-not-decoded --> The total loss for G2D is the sum of the VLA loss (Eq. 1) and the PA loss (Eq. 4): <!-- formula-not-decoded --> It is worth noting that the pseudo mask is designed as a pixel-wise pretext supervisory signal. Although there is no manual annotation involved, the pseudo mask is constructed from the visual feature of the image encoder, which is pre-trained to align with radiology reports and thus contains clinical knowledge such as anatomical regions mentioned by the reports. In this sense, it can be a good surrogate target for learning pixel-wise semantic information. To demonstrate that the pseudo mask serves as a meaningful target for dense visual pre-training, we conduct an ablation study to use a perturbed pseudo mask with corrupt semantics for pre-training, and compare it to the proposed pseudo mask, as detailed in Table 8 and Sec A.6. ## 4 Experiments and Analysis In this section, we compare our approach with SOTA medical VLP techniques. The implementation details and dataset training/test splits are reported in Sec A.3, A.4. Pretraining Dataset and Configuration Weutilise the MIMIC-CXR dataset [29, 24]. After preprocessing based on established protocols [9, 5], it provides 213,384 image-text pairs for pre-training. For the VLP part, we employ a standard ResNet-50 as the vision encoder F e and adopt the decoder part of a U-Net as the vision decoder F d . We adopt ClinicalBERT [30] as the text encoder using configurations described in [5, 21]. In line with [9, 8], G2D is pre-trained for 50 epochs across 16 A100 GPUs, each accommodating a batch size of 128. The AdamW optimizer is employed with a learning rate set to 2 × 10 -4 and a weight decay of 1 × 10 -8 . Additionally, a linear warm-up and a cosine annealing scheduler are incorporated in the training process. ## 4.1 Downstream Task Datasets and Configurations For downstream tasks, our focus is to evaluate the efficacy of G2D in learning granular visual features that can be used for localisation, vision-language understanding, and visual recognition tasks. We examine the capability and transferability of the learned cross-modal representations by using them for five distinct medical imaging tasks, covering a spectrum of 25 different diseases. Medical Image Segmentation. This task utilises the RSNA [31] and SIIM [32] datasets, following preprocessing guidelines established in [9, 8]. We adopt U-Net [1] fine-tuning configurations following [8, 9]. The pre-trained vision encoder is frozen, while only the decoder parameters are updated during fine-tuning. Performance is assessed using the Dice score, following the evaluation protocol in [8, 9]. It is noteworthy that the original MedKLIP [5] uses a different configuration ( updating the vision encoder ) compared to other methods ( freezing the vision encoder ) [4, 8, 9, 6]. Therefore, in these experiments, we reference the results reported in [20], which reimplemented MedKLIP under a setting consistent with all other methods. For a fair comparison specifically with MedKLIP, we also reimplement G2D under MedKLIP's original setting, as reported in the Sec A.5. Medical Object Detection. This task is conducted using the RSNA dataset [31] for Pneumonia Detection and the Object-CXR dataset [33] for Foreign Objects Detection, adhering to preprocessing methods from [9]. We employ YOLOv3 [34] for detection, using the pre-trained vision encoder and updating an additional detection head during fine-tuning. We report the mean Average Precision (mAP) with IoU thresholds between 0.4 ∼ 0.75. The setup for this task is in accordance with in [9]. Zero-shot Medical Image Visual Grounding. In accordance with [5], this task is conducted on the RSNA [31] and SIIM [32] datasets, using the same official data split and evaluation metrics. We employ CXR images as input and utilise the corresponding ground truth label maps for assessing the grounding performance, in terms of recall, IoU, and Dice score. Zero-shot Medical Image Classification. In compliance with the guidelines set forth in [5, 21], we conduct this task on the RSNA [31], SIIM [32], CheXpert [35], and CXR14 [36] datasets. For the RSNA and SIIM datasets, we employ the test set splits provided by MedKLIP [5], given that KAD[21] did not conduct experiments on these two datasets. For the CheXpert and CXR14 datasets [35, 36], we use the official test set splits to ensure a fair comparison with KAD [21]. It is important to note that MedKLIP [5] creates its own test split rather than using the official test split. Hence, we do not use MedKLIP's splits in our experiments. We report the results using the macro average of AUC, F1, and ACC scores across all diseases. Medical Image Fine-tuned Classification. In alignment with [5, 21], we use the CXR14 dataset [36], comprising 112,120 frontal-view X-rays from 30,805 patients, annotated for 14 diseases. We adhere to the official split for consistent evaluation, following KAD [21]. It is worth noting that MedKLIP does not use the official data split. Hence, we refer to the results reported in KAD [21] rather than those from the original MedKLIP [5]. To ensure a fair comparison with MedKLIP, we reimplemented G2D for this experiment under the MedKLIP configuration, as detailed in Sec A.5. CXR images are resized to 256 × 256 [21]. During fine-tuning, all model parameters are updated, including the pre-trained vision encoder and linear classifier. The AdamW optimizer is used with a learning rate of 1 × 10 -4 and a batch size of 64 for 50 epochs. Evaluation is based on the AUC score, adhering to the protocol outlined in [8, 9, 12]. Table 1: Results of semantic segmentation and object detection. Best results are highlighted in bold, with '-' denoting mAP values &lt; 1%. Methods with ⋆ use disease-level annotations. '/' indicates object detection not deployable with encoder-decoder architecture. The MedKLIP results in this table differ from the original work [5] because MedKLIP fine-tuned the encoder in their original study, whereas other methods froze the encoder. To ensure fairness, we reimplemented MedKLIP with the frozen encoder for comparison in this table. Additionally, for a fair comparison specifically with MedKLIP, we compare G2D with MedKLIP under its original configuration in Tab 7 and Sec A.5. | Tasks | Semantic Segmentation | Semantic Segmentation | Semantic Segmentation | Semantic Segmentation | (Dice) | Semantic Segmentation | Object Detection (mAP) | Object Detection (mAP) | Object Detection (mAP) | Object Detection (mAP) | Object Detection (mAP) | Object Detection (mAP) | |------------------------|-------------------------|-------------------------|-------------------------|-------------------------|----------|-------------------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------------|--------------------------| | Datasets | | SIIM | 100% | | RSNA | 100% | | RSNA | | Object CXR | Object CXR | Object CXR | | Methods | 1% | 10% | | 1% | 10% | | 1% | 10% | 100% | 1% | 10% | 100% | | Random Init | 9.0 | 28.6 | 54.3 | 6.9 | 10.6 | 18.5 | 1.0 | 4.0 | 8.9 | - | 0.5 | 4.4 | | ImageNet Init | 10.2 | 35.5 | 63.5 | 34.8 | 39.9 | 64.0 | 3.6 | 8.0 | 15.7 | - | 2.9 | 8.3 | | ConVIRT [4] | 25.0 | 43.2 | 59.9 | 55.0 | 67.4 | 67.5 | 8.2 | 15.6 | 17.9 | - | 8.6 | 15.9 | | GLoRA [8] | 35.8 | 46.9 | 63.4 | 59.3 | 67.5 | 67.8 | 9.8 | 14.8 | 18.8 | - | 10.6 | 15.6 | | GLoRIA-MIMIC [8] | 37.4 | 57.1 | 64.0 | 60.3 | 68.7 | 68.3 | 11.6 | 16.1 | 24.8 | - | 8.90 | 16.6 | | MGCA [9] | 49.7 | 59.3 | 64.2 | 63.0 | 68.3 | 69.8 | 12.9 | 16.8 | 24.9 | - | 12.1 | 19.2 | | M-FLAG [6] | 52.5 | 61.2 | 64.8 | 64.6 | 69.7 | 70.5 | 13.7 | 17.5 | 25.4 | - | 12.4 | 19.3 | | MedKLIP ⋆ [5] | 50.2 | 60.8 | 63.9 | 66.2 | 69.4 | 71.9 | 8.9 | 16.3 | 24.5 | - | 7.1 | 11.6 | | Ours (encoder) | 62.6 | 63.1 | 66.8 | 70.9 | 72.6 | 75.1 | 15.9 | 21.7 | 27.2 | 3.8 | 13.1 | 20.4 | | Ours (encoder-decoder) | 65.6 | 66.9 | 68.4 | 72.8 | 73.4 | 76.9 | | | / | | | | Medical Image Linear Classification. In strict accordance with the configuration in [8, 4, 9], this task is conducted on the CheXpert [35], RSNA [31], and COVIDx [37] datasets. We only update a randomly initialized linear classification layer, while the pre-trained vision encoder remains frozen. For fair evaluation, we employ AUC scores on CheXpert and RSNA, along with accuracy metrics on COVIDx, as mentioned in [8, 9]. Apart from zero-shot image classification and visual grounding, we fine-tune using 1% 10% 100% , , of the training data for all downstream tasks. Detailed settings, including implementation and data splits, are outlined in Sec A.4. ## 4.2 Performance on Visual Localisation Tasks In Tab 1, following [16, 38], we evaluate G2D alongside other SOTA approaches on two pivotal visual localisation tasks: semantic segmentation and object detection. The aim is to assess the efficacy of the dense visual features learned. Initially, we transfer only the encoder weights from the pre-trained G2D for the segmentation task, adhering to the protocols of [9, 8, 4, 6]. In this setup, our approach consistently achieves the highest performance across all data fractions for both SIIM [32] and RSNA datasets [31]. To assess the impact of the visual decoder pre-trained with the PS pretext task, we transfer the weights of both the encoder and decoder from G2D for the segmentation task, resulting in striking outcomes. Remarkably, with just 1% of training data, G2D surpasses the performance of all peer methods, even those trained with a full 100% of training data. This observation underlines the fact that the pixel-level pretext task, PS, significantly improves the quality of dense visual features derived from VLP, which provide advantages for the downstream segmentation task. In object detection, our method consistently outperforms existing methods across all data fractions for both RSNA and Object-CXR datasets [31, 33]. Notably, G2D achieves a 3.8% mAP on the Object-CXR dataset with just 1% of the data for fine-tuning, a significant leap from other methods that scarcely reach a 1% mAP. These results highlight the efficacy of our proposed model, G2D, and the pretext task, PS, especially in semantic segmentation tasks that rely on dense visual features. PS not only enables G2D to learn visual representations in the encoder-decoder structure but also reduces the gap between pretraining and downstream tasks. By enhancing the encoder's ability to capture global and dense features simultaneously, PS surpasses existing approaches, proving particularly advantageous for object detection tasks that heavily rely on dense features [39]. ## 4.3 Performance on Vision-Language Understanding In Tab 2, we evaluate the efficacy of G2D on vision-language understanding tasks, zero-shot visual grounding and zero-shot image classification. For the zero-shot visual grounding task, our proposed method outperforms peer approaches. Specifically, on the SIIM dataset [32], it achieves a leading Dice score of 5.1. This dominance persists in the RSNA dataset [31], where our method reaches a Table 2: Comparison between G2D (ours) and various other medical VLP methods in vision-language understanding tasks, with the best results emphasized in bold. Methods marked with ⋆ utilize extra annotated data during pre-training. '/' indicates that the original work did not report the results. Notably, KAD [21] does not report ACC for the CheXpert dataset. (a) Results of zero-shot visual grounding task. (b) Results of zero-shot image classification task. | Task | Zero-shot Visual Grounding | Zero-shot Visual Grounding | Zero-shot Visual Grounding | Zero-shot Visual Grounding | Zero-shot Visual Grounding | Zero-shot Visual Grounding | |------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------| | Datasets Methods | Recall | SIIM IoU | Dice | Recall | RSNA IoU | Dice | | GLoRIA [8] | 23.8 | 1.2 | 2.1 | 83.3 | 21.8 | 34.7 | | BioViL [40] | 19.6 | 1.7 | 2.6 | 85.2 | 30.3 | 43.9 | | MedKLIP ⋆ [5] | 35.6 | 2.1 | 4.0 | 86.6 | 31.7 | 46.5 | | Ours | 37.7 | 3.9 | 5.1 | 88.4 | 33.5 | 47.7 | | Task | Zero-shot Image Classification | Zero-shot Image Classification | Zero-shot Image Classification | Zero-shot Image Classification | Zero-shot Image Classification | Zero-shot Image Classification | Zero-shot Image Classification | Zero-shot Image Classification | Zero-shot Image Classification | Zero-shot Image Classification | Zero-shot Image Classification | |------------------|----------------------------------|----------------------------------|----------------------------------|----------------------------------|----------------------------------|----------------------------------|----------------------------------|----------------------------------|----------------------------------|----------------------------------|----------------------------------| | Datasets Methods | RSNA | RSNA | RSNA | SIIM | SIIM | SIIM | CXR14 | CXR14 | CXR14 | CheXpert | CheXpert | | | AUC | F1 | ACC | AUC | F1 | ACC | AUC | F1 | ACC | AUC | F1 | | ConVIRT [4] | 80.4 | 58.4 | 76.1 | 64.3 | 43.3 | 57.0 | 56.0 | 13.5 | 45.9 | 59.0 | 26.4 | | GLoRIA [8] | 71.5 | 49.0 | 71.3 | 53.4 | 38.2 | 40.5 | 61.0 | 17.4 | 50.3 | 75.0 | 57.0 | | BioViL [40] | 82.8 | 58.3 | 76.7 | 70.8 | 48.6 | 69.1 | 66.2 | 66.2 | 63.3 | 69.3 | 46.3 | | CheXzero ⋆ [5] | 85.8 | 62.1 | 79.4 | 68.8 | 47.0 | 54.7 | / | / | / | 88.9 | 60.6 | | MedKLIP ⋆ [5] | 86.9 | 63.4 | 80.0 | 89.2 | 68.3 | 84.3 | 72.6 | 24.4 | 79.6 | 87.9 | 61.4 | | KAD ⋆ | / | / | / | / | / | / | 78.9 | 32.3 | 81.6 | 90.5 | 64.6 | | Ours | 87.6 | 64.8 | 81.5 | 89.7 | 69.3 | 85.4 | 79.4 | 33.1 | 82.3 | 91.2 | 65.6 | Table 3: Evaluation of image classification fine-tuning on the CXR14 dataset is conducted, with all metrics presented as AUC scores, where the mean metric is macro-averaged. Best performances are highlighted in bold. Methods marked with ⋆ utilize extra annotated data for pre-training. MedKLIP's results here differ from the original study [5] as it did not utilize the official test split, unlike KAD [21]. We use the result of MedKLIP reported by KAD [21], which reimplemented MedKLIP on the official test set for fairness. All results in this table are sourced from KAD [21]. To compare fairly with MedKLIP, we assess G2D against its original configuration in Tab 7 and Sec A.5. | Data fraction | Method | Mean | Atelectasis | Cardiomegaly | Effusion | Infiltration | Mass | Nodule | Pneumonia | Pneumothorax | Consolidation | Edema | Emphysema | Fibrosis | Pleural Thicken | Hernia | |-----------------|--------------------------------------------------------------------------------------------|------------------------------------|------------------------------------|------------------------------------|--------------------------|--------------------------|-----------|----------------|----------------|--------------------------|--------------------------|-------------------------------|--------------------------|-------------------------------|--------------------------|--------------------------| | 1% | Random Init ImageNet Init ConVIRT [4] GLoRIA [8] BioViL [40] MedKLIP ⋆ [5] KAD ⋆ [21] Ours | 58.1 63.5 64.9 59.7 57.9 60.9 78.7 | 55.7 66.2 66.0 59.7 55.5 65.5 77.0 | 57.7 64.2 78.2 56.7 56.4 59.0 88.2 | 63.6 72.1 78.9 74.1 72.2 | 61.6 57.0 61.1 64.6 65.0 | 55.0 56.7 | 60.2 58.5 65.5 | 57.1 60.0 60.8 | 58.2 62.6 68.8 60.7 56.0 | 60.8 62.4 65.7 66.5 65.7 | 63.3 66.8 60.7 66.9 68.1 68.2 | 53.4 61.5 65.8 55.0 51.6 | 63.7 70.7 68.0 55.8 51.3 64.8 | 56.8 63.1 62.7 59.2 59.2 | 46.0 64.5 46.6 43.6 36.0 | | 1% | | | | | | | 59.0 | | | | | | | | | | | 1% | | | | | | | 59.6 | | | | | | | | | | | 1% | | | | | | | 55.9 | 55.7 | 61.1 | | | | | | | | | 1% | | | | | 74.5 | 64.3 | 55.0 | 54.6 61.1 | 62.6 60.9 | 59.9 | 65.9 | | 53.5 | | 59.3 | 40.0 | | 1% | | | | | 82.9 | 69.2 | 75.1 | 69.7 | 73.5 | 86.1 | 72.7 | 81.3 | 89.3 | 74.3 | 69.2 | 93.8 | | 1% | | 79.1 | 78.1 | 88.3 | 83.1 | 70.2 | 75.4 | 69.7 | 74.0 | 86.5 | 72.9 | 81.6 | 90.2 | 74.4 | 69.5 | 94.1 | | 10% | Random Init ImageNet Init ConVIRT [4] GLoRIA [8] BioViL [40] ⋆ | 69.1 | 68.2 | 76.6 | 74.6 | 67.4 | 62.3 | 58.0 | 63.6 | 72.8 | 67.8 | 78.0 | 64.7 | 71.5 | 65.3 | 77.1 | | 10% | | 72.6 | 70.9 | 79.8 | 76.9 | 68.4 | 69.3 | 65.6 | 63.0 | 79.3 | 67.1 | 76.7 | 74.9 | 72.9 | 71.1 | 81.0 | | 10% | | 77.1 | 74.0 | 84.3 | 81.1 | 69.3 | 74.8 | 70.0 | 67.1 | 82.8 | 70.1 | 81.4 | 87.1 | 76.7 | 71.9 | 89.3 | | 10% | | 74.3 | 72.1 | 80.8 | 80.0 | 68.7 | 73.3 | 67.5 | 65.8 | 77.9 | 67.6 | 79.7 | 79.9 | 78.7 | 69.3 | 78.7 | | 10% | | 72.7 | 70.3 | 78.5 | 79.0 | 66.6 | 71.8 | 67.1 | 66.5 | 76.7 | 68.4 | 79.9 | 76.1 | 74.8 | 65.3 | 76.3 | | 10% | MedKLIP [5] | 74.8 | 72.9 | 80.2 | 79.3 | 69.8 | 71.9 | 68.1 | 66.6 | 79.6 | 69.6 | 81.1 | 79.5 | 75.6 | 71.3 | 81.9 | | 10% | KAD ⋆ [21] | 80.7 | 77.6 | 88.9 | 83.3 | 71.8 | 78.3 | 71.9 | 73.7 | 87.2 | 75.0 | 83.3 | 90.3 | 80.7 | 72.3 | 95.3 | | 10% | Ours | 81.1 | 78.4 | 89.3 | 83.7 | 72.2 | 78.8 | 72.3 | 74.1 | 87.8 | 75.3 | 84.0 | 90.4 | 80.8 | 72.5 | 95.4 | | 100% | Random Init ImageNet Init | 79.0 | 75.0 | 87.9 | 81.5 | 69.1 | 79.8 | 72.6 | 70.3 | 82.6 | 73.1 | 83.9 | 83.5 | 80.7 | 75.4 | 90.3 | | 100% | | 80.4 | 76.3 | 86.7 | 82.3 | 69.3 | 82.3 | 76.3 | 71.9 | 84.0 | 73.7 | 84.2 | 89.3 | 81.9 | 77.0 | 89.9 | | 100% | ConVIRT [4] | 80.8 | 77.1 | 86.7 | 82.5 | 70.3 | 81.8 | 76.1 | 72.2 | 85.7 | 74.7 | 85.4 | 90.1 | 80.9 | 77.1 | 90.9 | | 100% | GLoRIA [8] | 80.0 | 76.0 | 85.5 | 81.8 | 70.0 | 81.4 | 74.9 | 71.5 | 82.8 | 73.9 | 83.2 | 88.7 | 81.3 | 76.7 | 92.1 | | 100% | BioViL [40] | 80.0 | 76.5 | 87.1 | 82.4 | 69.7 | 81.9 | 75.2 | 71.0 | 84.5 | 74.2 | 84.2 | 87.1 | 82.1 | 75.9 | 88.8 | | 100% | MedKLIP ⋆ [5] | 80.1 | 76.4 | 84.9 | 82.3 | 69.7 | 82.0 | 74.7 | 71.2 | 83.9 | 75.1 | 84.8 | 87.9 | 81.7 | 77.7 | 89.2 | | 100% | KAD ⋆ [21] | 82.5 | 78.5 | 89.7 | 84.0 | 71.3 | 83.6 | 77.1 | 74.0 | 87.4 | 75.3 | 86.0 | 91.6 | 82.9 | 77.8 | 96.1 | | 100% | Ours | 83.1 | 79.9 | 90.2 | 84.5 | 71.8 | 84.2 | 78.0 | 74.2 | 87.7 | 75.6 | 86.9 | 92.0 | 83.1 | 78.2 | 96.5 | Dice score of 47.7, surpassing other SOTA approaches. When examining zero-shot image classification, our method again shows its superiority across the AUC, F1, and ACC metrics on both the RSNA [31] and SIIM datasets [32]. Such consistent and superior outcomes underscore the adaptability and effectiveness of G2D in handling vision-language understanding tasks, indicating that integrating PS into G2D can enhance not only uni-modal but also cross-modal tasks. ## 4.4 Performance on Visual Recognition Tasks In our final assessment focused on visual recognition, Tab 3 demonstrates our method's consistent supremacy on the CXR14 dataset [36] for fine-tuned disease classification across 1%, 10%, and 100% training data. Similarly, Tab 4 underscores that G2D achieves the highest performance on the CheXpert, RSNA, and COVIDx datasets [35, 31, 37] for linear evaluation across all training data ratio. Notably, G2D consistently outperforms even those methods like MedKLIP and KAD [41] that leverage additional disease-level annotations during pre-training stage. This demonstrates G2D's representative visual features, suggesting that enhancing dense representation learning via PS can also improve results in tasks primarily anchored on global representation. Table 4: Linear classification results for CheXpert, RSNA, and COVIDx datasets with 1%, 10%, and 100% training data. The best results are highlighted in bold. Methods with ⋆ leverage disease-level annotations for pre-training. The evaluation metric follows [9]. | Datasets (Metric) | CheXpert (AUC) | CheXpert (AUC) | CheXpert (AUC) | RSNA (AUC) | RSNA (AUC) | RSNA (AUC) | COVIDx (ACC) | COVIDx (ACC) | COVIDx (ACC) | |---------------------|------------------|------------------|------------------|--------------|--------------|--------------|----------------|----------------|----------------| | Methods | 1% | 10% | 100% | 1% | 10% | 100% | 1% | 10% | 100% | | Random Init | 56.1 | 62.6 | 65.7 | 58.9 | 69.4 | 74.1 | 50.5 | 60.3 | 70.0 | | ImageNet Init | 74.4 | 79.7 | 81.4 | 74.9 | 74.5 | 76.3 | 64.8 | 78.8 | 86.3 | | ConVIRT [4] | 85.9 | 86.8 | 87.3 | 77.4 | 80.1 | 81.3 | 72.5 | 82.5 | 92.0 | | GLoRIA [8] | 86.6 | 87.8 | 88.1 | 86.1 | 88.0 | 88.6 | 67.3 | 77.8 | 89.0 | | GLoRIA-MIMIC [8] | 87.1 | 88.7 | 88.0 | 87.0 | 89.4 | 90.2 | 66.5 | 80.5 | 88.8 | | MGCA [9] | 87.6 | 88.0 | 88.2 | 88.6 | 89.1 | 89.9 | 72.0 | 83.5 | 90.5 | | MRM[12] | 88.5 | 88.5 | 88.7 | 91.3 | 92.7 | 93.3 | 66.9 | 79.3 | 90.8 | | MedKLIP ⋆ [5] | 86.2 | 86.5 | 87.7 | 87.3 | 88.0 | 89.3 | 74.5 | 85.2 | 90.3 | | Ours | 89.7 | 90.4 | 91.1 | 92.2 | 92.9 | 93.6 | 76.6 | 88.2 | 93.4 | Table 5: Results of various ablation experiments. The best results are bolded. (a) Loss for the decoder. 'None' indicates Encoder-Only visual backbones. (b) Threshold for constructing pseudo segmentation masks. (c) Ablation of the number of dimensions of projectors. | Decoder Loss | SIIM Dice | RSNA mAP | CXR14 AUC | |-------------------|-------------|------------|-------------| | None | 49.2 ± 1.5 | 11.7 ± 1.2 | 77.1 ± 1.5 | | Reconstruction | 53.4 ± 1.3 | 13.0 ± 0.9 | 77.3 ± 2.1 | | Pseudo Seg (Ours) | 65.6 ± 1.7 | 15.9 ± 0.8 | 79.1 ± 1.2 | | Threshold | SIIM Dice | RSNA mAP | CXR14 AUC | |----------------|-------------|------------|-------------| | 85% percentile | 65.6 ± 1.7 | 15.9 ± 0.8 | 79.1 ± 1.2 | | 75% percentile | 63.0 ± 2.1 | 14.1 ± 1.2 | 78.3 ± 2.0 | | median | 58.8 ± 1.6 | 12.5 ± 2.3 | 75.6 ± 1.1 | | GMM[42] | 59.2 ± 1.5 | 12.9 ± 1.4 | 75.2 ± 1.9 | | Num of Dim | SIIM Dice | RSNA mAP | CXR14 AUC | |--------------|-------------|------------|-------------| | 128 | 65.6 ± 1.7 | 15.9 ± 0.8 | 79.1 ± 1.2 | | 256 | 64.9 ± 1.9 | 16.1 ± 1.1 | 78.3 ± 1.5 | | 512 | 64.6 ± 1.2 | 15.7 ± 1.0 | 78.0 ± 1.3 | (d) Ablation of multi-head attention maps aggregation. (e) Number of attention (f) Refinement of Pseudo Segmenta- | Method | SIIM Dice | RSNA mAP | CXR14 AUC | |-----------------|-------------|------------|-------------| | w Aggregation | 65.6 ± 1.7 | 15.9 ± 0.8 | 79.1 ± 1.2 | | w/o Aggregation | 62.1 ± 2.2 | 13.5 ± 1.7 | 77.5 ± 2.3 | heads. tion Masks | Heads | SIIM Dice | RSNA mAP | CXR14 AUC | |---------|-------------|------------|-------------| | 1 | 63.4 ± 2.0 | 14.2 ± 1.4 | 78.2 ± 1.0 | | 2 | 64.7 ± 1.6 | 15.1 ± 2.3 | 78.8 ± 1.5 | | 3 | 65.6 ± 1.7 | 15.9 ± 0.8 | 79.1 ± 1.2 | | 4 | 65.3 ± 1.6 | 15.4 ± 0.9 | 78.7 ± 1.9 | | | SIIM Dice | RSNA mAP | CXR14 AUC | |--------------------|-------------|------------|-------------| | w/o body mask | 63.4 ± 1.5 | 15.3 ± 2.1 | 78.4 ± 1.6 | | w/o edge smoothing | 64.1 ± 1.2 | 15.2 ± 1.7 | 78.5 ± 2.2 | | w both (Ours) | 65.6 ± 1.7 | 15.9 ± 0.8 | 79.1 ± 1.2 | ## 4.5 Ablation Studies Pseudo Segmentation vs. Reconstruction. In Tab 5a, we evaluate the impact of the proposed PS pretext task in comparison to pixel reconstruction and models without a decoder-level constraint. The model pre-trained with PS outperforms the other two approaches across all three downstream tasks, particularly in semantic segmentation. While the model pre-trained with a pixel reconstruction constraint exhibit improved performance compared to unconstrained variants, such models still underperform the model with the PS constraint. These results underscore the effectiveness of decoderlevel pretext tasks and suggest that an emphasis on high-level semantics, derived from PS, is more beneficial than focusing on the low-level semantics from pixel reconstruction. The PS potentially reduces the disparity between features learned through VLP and those required by downstream semantic segmentation tasks. It also enables the model to acquire more representative features that are beneficial for various tasks. Threshold of Pseudo Mask Construction. As shown in Tab 5b, performance varies with different thresholds, with the 85% percentile threshold proving most effective across all three downstream tasks. Despite employing the Gaussian Mixture Model (GMM) for pseudo mask creation, as suggested by [42], its performance is still surpassed by the 85% percentile approach. This indicates that the original attention map might contain noise, and a higher threshold is beneficial for generating more effective pseudo masks. Furthermore, Tab 5d highlights the importance of aggregating multi-head attention maps for mask construction. Given the absence of explicit semantic supervision in the PS pretext task, not aggregating these maps leads to the creation of multiple pseudo masks. This excess of masks introduce ambiguous training objectives for VLP. Impact of Mask Refinement. Refinement of the pseudo masks affects the model's efficacy, as shown in Tab 5f. Performance tends to decrease when either the body mask is omitted or edge smoothing is not applied. However, integrating both these strategies, as we implement in G2D, yields optimal results. This underscores the vital role of pseudo mask refinement in enhancing model performance. Ablation on Hyperparameters. We further ablate the number of attention heads and projector dimensionality. Performance improves with more attention heads, peaking at 3 before slightly declin- ing at 4 (Tab 5e). Optimal segmentation and classification results are achieved with 128-dimensional projectors. While 256 dimensions provide slight benefits for object detection, they reduce performance in other tasks (Tab 5c). Projectors of 512 dimensions do not yield further gains. Thus, we select 3 attention heads and 128-dimensional projectors for an optimal balance of complexity and effectiveness. ## 5 Conclusion In this study, we introduce G2D, a novel medical VLP framework for learning global and dense-level representations. Our proposed pixel-level pretext task, pseudo segmentation, leverages a refined attention map to predict a pseudo mask, capturing dense visual features during VLP without requiring additional trainable parameters for its construction. Our model pretrained with this pretext task achieves superior performance across five diverse medical imaging tasks and outperforms methods pretrained with annotated data [5, 21], especially in semantic segmentation. Specifically, on the SIIM [32] dataset, G2D, when fine-tuned with only 1% of the training data, outperforms other medical VLP approaches that utilize the full 100% training set. We anticipate that G2D will inspire further exploration of novel and clinically-guided pretext tasks for medical VLP. ## References - [1] O. Ronneberger, P. Fischer, and T. Brox, 'U-net: Convolutional networks for biomedical image segmentation,' in Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18 . Springer, 2015, pp. 234-241. - [2] C. Liu, S. Cheng, M. Shi, A. Shah, W. Bai, and R. Arcucci, 'Imitate: Clinical prior guided hierarchical vision-language pre-training,' arXiv preprint arXiv:2310.07355 , 2023. - [3] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark et al. , 'Learning transferable visual models from natural language supervision,' in International Conference on Machine Learning . PMLR, 2021, pp. 8748-8763. - [4] Y. Zhang, H. Jiang, Y. Miura, C. D. Manning, and C. P. Langlotz, 'Contrastive learning of medical visual representations from paired images and text,' arXiv preprint arXiv:2010.00747 , 2020. - [5] C. Wu, X. Zhang, Y. Zhang, Y. Wang, and W. Xie, 'Medklip: Medical knowledge enhanced languageimage pre-training,' medRxiv , pp. 2023-01, 2023. - [6] C. Liu, S. Cheng, C. Chen, M. Qiao, W. Zhang, A. Shah, W. Bai, and R. Arcucci, 'M-flag: Medical vision-language pre-training with frozen language models and latent space geometry optimization,' in International Conference on Medical Image Computing and Computer-Assisted Intervention . Springer, 2023, pp. 637-647. - [7] E. Tiu, E. Talius, P. Patel, C. P. Langlotz, A. Y . Ng, and P. Rajpurkar, 'Expert-level detection of pathologies from unannotated chest x-ray images via self-supervised learning,' Nature Biomedical Engineering , pp. 1-8, 2022. - [8] S.-C. Huang, L. Shen, M. P. Lungren, and S. Yeung, 'Gloria: A multimodal global-local representation learning framework for label-efficient medical image recognition,' in Proceedings of the IEEE/CVF International Conference on Computer Vision , 2021, pp. 3942-3951. - [9] F. Wang, Y. Zhou, S. Wang, V. Vardhanabhuti, and L. Yu, 'Multi-granularity cross-modal alignment for generalized medical visual representation learning,' arXiv preprint arXiv:2210.06044 , 2022. - [10] C. Liu, A. Shah, W. Bai, and R. Arcucci, 'Utilizing synthetic data for medical vision-language pretraining: Bypassing the need for real images,' arXiv preprint arXiv:2310.07027 , 2023. - [11] P. Cheng, L. Lin, J. Lyu, Y. Huang, W. Luo, and X. Tang, 'Prior: Prototype representation joint learning from medical images and reports,' arXiv preprint arXiv:2307.12577 , 2023. - [12] H.-Y. Zhou, C. Lian, L. Wang, and Y. Yu, 'Advancing radiograph representation learning with masked record modeling,' in The Eleventh International Conference on Learning Representations . - [13] Y. Liu, S. Zhang, J. Chen, K. Chen, and D. Lin, 'Pixmim: Rethinking pixel reconstruction in masked image modeling,' arXiv preprint arXiv:2303.02416 , 2023. - [14] K. He, X. Chen, S. Xie, Y. Li, P. Dollár, and R. Girshick, 'Masked autoencoders are scalable vision learners,' in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , 2022, pp. 16 000-16 009. | [15] | Y. Liu, S. Zhang, J. Chen, Z. Yu, K. Chen, and D. Lin, 'Improving pixel-based mim by reducing wasted modeling capability,' in Proceedings of the IEEE/CVF International Conference on Computer Vision , 2023, pp. 5361-5372. | |--------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [16] | L. H. Li, P. Zhang, H. Zhang, J. Yang, C. Li, Y. Zhong, L. Wang, L. Yuan, L. Zhang, J.-N. Hwang et al. , 'Grounded language-image pre-training,' in Proceedings of the IEEE/CVF Conference on Computer Vi- sion and Pattern Recognition , 2022, pp. 10 965-10 975. | | [17] | Y. Gao, J. Liu, Z. Xu, J. Zhang, K. Li, R. Ji, and C. Shen, 'Pyramidclip: Hierarchical feature alignment for vision-language model pretraining,' Advances in neural information processing systems , vol. 35, pp. 35 959-35 970, 2022. | | [18] | C. Zhou, C. C. Loy, and B. Dai, 'Extract free dense labels from clip,' in European Conference on Com- puter Vision . Springer, 2022, pp. 696-712. | | [19] | H. Luo, J. Bao, Y. Wu, X. He, and T. Li, 'Segclip: Patch aggregation with learnable centers for open- vocabulary semantic segmentation,' in International Conference on Machine Learning . PMLR, 2023, pp. 23 033-23 044. | | [20] | Z. Wan, C. Liu, M. Zhang, J. Fu, B. Wang, S. Cheng, L. Ma, C. Quilodrán-Casas, and R. Arcucci, 'Med- unic: Unifying cross-lingual medical vision-language pre-training by diminishing bias,' arXiv preprint arXiv:2305.19894 , 2023. | | [21] | X. Zhang, C. Wu, Y. Zhang, W. Xie, and Y. Wang, 'Knowledge-enhanced visual-language pre-training on chest radiology images,' Nature Communications , vol. 14, no. 1, p. 4542, 2023. | | [22] | W. Huang, H. Zhou, C. Li, H. Yang, J. Liu, and S. Wang, 'Enhancing representation in radiography- reports foundation model: A granular alignment algorithm using masked contrastive learning,' arXiv preprint arXiv:2309.05904 , 2023. | | [23] | J. Ma, Y. He, F. Li, L. Han, C. You, and B. Wang, 'Segment anything in medical images,' Nature Com- munications , vol. 15, no. 1, p. 654, 2024. | | [24] | A. E. Johnson, T. J. Pollard, N. R. Greenbaum, M. P. Lungren, C.-y. Deng, Y. Peng, Z. Lu, R. G. Mark, S. J. Berkowitz, and S. Horng, 'Mimic-cxr-jpg, a large publicly available database of labeled chest radio- graphs,' arXiv preprint arXiv:1901.07042 , 2019. | | [25] | A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, 'Attention is all you need,' Advances in neural information processing systems , vol. 30, 2017. | | [26] | C. Ouyang, C. Biffi, C. Chen, T. Kart, H. Qiu, and D. Rueckert, 'Self-supervision with superpixels: Training few-shot medical image segmentation without annotation,' in Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXIX 16 . Springer, 2020, pp. 762-780. | | [27] | F. Isensee, P. F. Jaeger, S. A. Kohl, J. Petersen, and K. H. Maier-Hein, 'nnu-net: a self-configuring method for deep learning-based biomedical image segmentation,' Nature methods , vol. 18, no. 2, pp. 203-211, 2021. | | [28] | C. Tomasi and R. Manduchi, 'Bilateral filtering for gray and color images,' in Sixth international confer- ence on computer vision (IEEE Cat. No. 98CH36271) . IEEE, 1998, pp. 839-846. | | [29] | A. E. Johnson, T. J. Pollard, N. R. Greenbaum, M. P. Lungren, C.-y. Deng, Y. Peng, Z. Lu, R. G. Mark, S. J. Berkowitz, and S. Horng, 'Mimic-cxr-jpg, a large publicly available database of labeled chest radio- graphs,' arXiv preprint arXiv:1901.07042 , 2019. | | [30] | E. Alsentzer, J. R. Murphy, W. Boag, W.-H. Weng, D. Jin, T. Naumann, and M. McDermott, 'Publicly available clinical bert embeddings,' arXiv preprint arXiv:1904.03323 , 2019. | | [31] | G. Shih, C. C. Wu, S. S. Halabi, M. D. Kohli, L. M. Prevedello, T. S. Cook, A. Sharma, J. K. Amorosa, V. Arteaga, M. Galperin-Aizenberg et al. , 'Augmenting the national institutes of health chest radiograph dataset with expert annotations of possible pneumonia,' Radiology: Artificial Intelligence , vol. 1, no. 1, p. e180041, 2019. | | [32] | C. Steven G. Langer, PhD and M. George Shih, MD, 'Siim-acr pneumothorax segmentation,' 2019. | | [33] | J. Healthcare, 'Object-cxr-automatic detection of foreign objects on chest x-rays,' 2020. | | [34] | J. Redmon and A. Farhadi, 'Yolov3: An incremental improvement,' arXiv preprint arXiv:1804.02767 , | - [34] J. Redmon and A. Farhadi, 'Yolov3: An incremental improvement,' arXiv preprint arXiv:1804.02767 , 2018. - [35] J. Irvin, P. Rajpurkar, M. Ko, Y. Yu, S. Ciurea-Ilcus, C. Chute, H. Marklund, B. Haghgoo, R. Ball, K. Shpanskaya et al. , 'Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison,' in Proceedings of the AAAI conference on artificial intelligence , vol. 33, 2019, pp. 590597. | [36] | X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, 'Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases,' in Proceedings of the IEEE conference on computer vision and pattern recognition , 2017, pp. 2097-2106. | |--------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [37] | L. Wang, Z. Q. Lin, and A. Wong, 'Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images,' Scientific reports , vol. 10, no. 1, pp. 1-12, 2020. | | [38] | H. Zhang, P. Zhang, X. Hu, Y.-C. Chen, L. Li, X. Dai, L. Wang, L. Yuan, J.-N. Hwang, and J. Gao, 'Glipv2: Unifying localization and vision-language understanding,' Advances in Neural Information Pro- cessing Systems , vol. 35, pp. 36 067-36 080, 2022. | | [39] | T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, 'Feature pyramid networks for object detection,' in Proceedings of the IEEE conference on computer vision and pattern recognition , 2017, pp. 2117-2125. | | [40] | B. Boecking, N. Usuyama, S. Bannur, D. C. Castro, A. Schwaighofer, S. Hyland, M. Wetscherek, T. Nau- mann, A. Nori, J. Alvarez-Valle et al. , 'Making the most of text semantics to improve biomedical vision- language processing,' in European conference on computer vision . Springer, 2022, pp. 1-21. | | [41] | C. Wu, X. Zhang, Y. Zhang, Y. Wang, and W. Xie, 'Medklip: Medical knowledge enhanced language- image pre-training,' medRxiv , pp. 2023-01, 2023. | | [42] | M. Dombrowski, H. Reynaud, M. Baugh, and B. Kainz, 'Foreground-background separation through concept distillation from generative image foundation models,' in Proceedings of the IEEE/CVF Interna- tional Conference on Computer Vision , 2023, pp. 988-998. | | [43] | A. Saporta, X. Gui, A. Agrawal, A. Pareek, S. Q. Truong, C. D. Nguyen, V.-D. Ngo, J. Seekins, F. G. Blankenberg, A. Y. Ng et al. , 'Benchmarking saliency methods for chest x-ray interpretation,' Nature Machine Intelligence , vol. 4, no. 10, pp. 867-878, 2022. | ## A Appendix / supplemental material ## A.1 Limitations and Future Work Our work primarily concentrates on learning dense visual representations from pseudo masks, which are generated from attention masks under language supervision. Due to the weak supervision signal, the pseudo masks may not effectively associate each pixel with the corresponding text tokens, potentially capping the performance of our method. Currently, our approach involves learning both global and pixel-level representations through VLP. In future studies, we aim to delve into regional visual representations during VLP to establish more precise correlations between specific chest X-ray (CXR) regions and phrases in radiology reports. ## A.2 Broader Impacts Our G2D model offers an effective approach for the automatic diagnosis of chest X-ray abnormalities using a small amount of annotated data. This can help decrease the burden on radiologists and enhance healthcare in underprivileged regions. However, medical data, such as chest X-rays and radiology reports, might include sensitive or potentially harmful information. We strongly advise a thorough examination of the data prior to using our model in real-world applications. ## A.3 Pre-training Implementation Details <!-- image --> ## Report: There is no focal consolidation, Ipleural effusion or pneumothorax. Bilateral nodular opacities that most represent nipple Ishadows. The cardiomediastinal Isilhouette is normal. The imaged Jupper abdomen is unremarkable. Chronic deformity of the posterior left sixth and seventh ribs are Inoted. No acute cardiopulmonary process. likely Figure 3: An exemplar pair of X-ray image and associated clinical report from the MIMIC-CXR dataset [24]. The chest X-ray (CXR) images from the MIMIC-CXR dataset [29] are resized to dimensions of 256 × 256 and subsequently center-cropped to 224 × 224 , adhering to the procedure described in [4, 8, 9], with an example shown in Fig 3. The intensity of each image is normalized to a range of [0 , 1] . During the pre-training stage, we employ data augmentation techniques including random grayscale, random perspective, and auto contrast adjustments, using the PyTorch vision library . 1 ## A.4 Downstream Task Implementation Details The data split information into train/valid/test sets are described in Tab. 6. For all downstream tasks, except from zero-shot image classification and visual grounding, we train with 1% 10% 100% , , of the training set. The downstream tasks are deployed on a 40G A100 GPU. 1 https://pytorch.org/vision/stable/transforms.html Table 6: Details on Data Split: The symbol '/' denotes that training/validation data is not required for the zero-shot tasks. | Task | Dataset | Split | Train | Valid | Test | |--------------------------------|-------------------------------------|----------------------|----------------------|------------------|--------------------| | Linear Classification | CheXpert [35] RSNA [31] COVIDx [37] | [35] [9, 31] [9, 37] | 186,027 16,010 23988 | 5,000 5,337 5998 | 202 5,337 400 | | Fine-tuned | CXR14 [36] | [21] | 77,872 | 8,652 | 25,596 | | Image Segmentation | RSNA [31] SIIM [32] | [8, 9] [8, 9] | 16,010 8,433 | 5,337 1,807 | 5,337 1,807 | | Object Detection | RSNA [31] Object-CXR [33] | [8, 9] [9] | 16,010 6,400 | 5,337 1,600 | 5,337 1,000 | | Zero-shot Image Classification | RSNA [31] SIIM [32] CXR14 [36] | [5] [5] [36, 21] | / / / | / / / | 5,337 1,807 25,596 | | | CheXpert [35] | [43, 21] [5] | / | / | | | Zero-shot Visual | | | / | / | 500 | | Grounding | RSNA [31] | | | | 5,337 | | | SIIM [32] | [5] | / | / | 1,807 | ## A.4.1 Visual Localization Medical Image Segmentation. For the segmentation tasks on the RSNA [31] and SIIM [32] datasets, we initially employ the vision encoder from the pre-trained model. Additionally, we transfer both the vision encoder and decoder from the pre-trained model, and proceed to train the segmentation network. We implement early stopping during the training process, limiting it to 50 epochs. Alearning rate of 2e-4 and a weight decay of 0.05 are adopted. AdamW is utilized as the optimizer, with β 1 and β 2 values set at 0.9 and 0.999, respectively. For the SIIM [32] dataset, the default batch size is set at 8, while for the RSNA [31] dataset, it is set at 16. All configurations strictly adhere to the protocol provided in [9]. Medical Image Object Detection. The pneumonia detection task on RSNA [31] and foreign objects detection task on Object-CXR [33] datasets are executed on a single A100 GPU. For both datasets, early stopping is implemented during the training process, limited to 50 epochs. AdamW is employed as the optimizer across both datasets. For the RSNA [31] dataset, a batch size of 8 is set for 1% data fine-tuning with a learning rate of 2e-4, a weight decay of 1e-6, and β 1 , β 2 values of 0.9 and 0.999, respectively. For 10% and 100% data fine-tuning, the batch size is adjusted to 16, with a learning rate of 5e-4, a weight decay of 1e-6, and the same β 1 , β 2 values. Similarly, for the Object-CXR [33] dataset, a batch size of 8 is set for 1% data fine-tuning, with the identical learning rate, weight decay, and β values as the RSNA dataset. For 10% and 100% data fine-tuning, the batch size is adjusted to 16, again with a learning rate of 5e-4, a weight decay of 1e-6, and β 1 , β 2 values of 0.9 and 0.999. The IOU and NMS thresholds are set at [0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75] and 0.5, respectively. All configurations are in strict compliance with the protocol delineated in [9]. ## A.4.2 Vision-Language Understanding Zero-shot Image Classification. The original CXR image goes through a two-step preprocessing routine. Initially, it is resized to the dimension of 256 × 256 , and then center cropped to 224 × 224 . Following the methodologies outlined in [8, 9], all pixel values are normalized to the range [0 , 1] . The resulting resized image is then fed through a visual encoder, followed by a visual projector to generate the image embedding ˆ v i . Simultaneously, the prompts are fed into a text encoder to obtain text embeddings ˆ l i . The classification evaluation hinges on measuring the cosine similarity between the image and text embeddings for each prompt associated with a specific class. The classification outcome is determined by comparing the cosine similarities. Specifically, if the cosine similarity between the image embedding and the positive prompt (e.g., disease ) surpasses that between the image embedding and the corresponding negative prompt (e.g., No disease ), the outcome is deemed positive. Conversely, if the reverse holds true, the outcome is negative. The prompt is designed following [7]. Zero-shot Visual Grounding. To execute this task, we adhere to the BioViL pipeline as described in [40]. The visual grounding task can be regarded as a pixel-level classification task, driven by the text prompt and the dense visual embedding. The image is fed into the visual encoder to acquire the dense feature map V i from the final convolutional layer of the image encoder, yielding a shape of C × H × W . At the same time, the prompt is processed through the text encoder and projected into the cross-modal space, resulting in ˆ l i . The cosine similarity between ˆ l i and all elements of V i at the channel level generates a similarity map. This map is then resized to match the original image size and utilized as the segmentation results to evaluate the zero-shot grounding performance. ## A.4.3 Visual Recognition We conduct evaluations on the CheXpert [35], RSNA [31], COVIDx [37], and CXR14 datasets [36]. In alignment with previous studies [8, 4, 9, 5, 21], linear classification is implemented on CheXpert [35], RSNA [31], and COVIDx [37]. Here, we update a randomly initialized linear layer while keeping the visual encoder frozen. We adhere to the official test set partition from [5, 21, 36] for a fair comparison. During our linear classification task, training is performed over 50 epochs with a learning rate of 5e-4, a batch size of 8, employing the AdamW optimizer with parameters: β 1 = 0 9 . and β 2 = 0 999 . . For the CXR14 dataset [36], we follow the experimental setup from [21], employing fine-tuned classification while updating all parameters from the visual encoder and linear layer. Images are resized to 256 × 256 and data augmentation is carried out as recommended in [21]. The AdamW optimizer is utilized with a learning rate of 1 × 10 -4 and a batch size of 64 for 50 epochs. The linear classification tasks are executed on a single A100 GPU with 40GB memory, using the vision encoder from our pre-trained model as the visual backbone. Fine-tuning is carried out on the randomly initialized linear layer for 50 epochs with early stopping, maintaining a learning rate of 5e-4 and a default batch size of 8. We set AdamW as our optimizer, with β 1 of 0.9, β 2 of 0.999, and a weight decay rate of 1e-6. ## A.5 Comparison under MedKLIP Configuration Table 7: Performance of CXR14 Classification Fine-Tuning and Segmentation Results on SIIM and RSNA using the MedKLIP Setting [5]. | | CXR14 (AUC) | CXR14 (AUC) | CXR14 (AUC) | RSNA (Dice) | RSNA (Dice) | RSNA (Dice) | SIIM (Dice) | SIIM (Dice) | SIIM (Dice) | |-----------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------|---------------| | | 1% | 10% | 100% | 1% | 10% | 100% | 1% | 10% | 100% | | MedKLIP ⋆ | 77.2 | 78.9 | 83.2 | 70.6 | 71.6 | 75.8 | 66.6 | 72.1 | 79.4 | | G2D(Ours) | 80.4 | 83.8 | 86.1 | 73.8 | 76.1 | 76.5 | 70.6 | 74.5 | 82.3 | To strictly compare our work with MedKLIP [5], we reimplement G2D for fine-tuning on CXR14 classification, as well as SIIM and RSNA segmentation tasks, adhering strictly to the MedKLIP configuration. This approach is necessary because the settings of MedKLIP differ significantly from the other methods that we compare to, such as [4, 8, 9, 6, 21]. Specifically, MedKLIP updates both the encoder and decoder during segmentation tasks, whereas the other methods only update the decoder and keep the encoder frozen. Moreover, MedKLIP employs its own customized data split for CXR14 classification, contrasting with KAD [21], which uses the official CXR14 dataset split. Given these differences, comparing other methods directly under the MedKLIP setting could be seen as unfair. Therefore, we conducted a separate comparison between G2D and MedKLIP using the MedKLIP setting. The results, presented in Tab 7, demonstrate that G2D outperforms MedKLIP across all tasks and data ratios, even within the MedKLIP setting. ## A.6 Verifying Pseudo Segmentation with Semantic Meaning To investigate whether the improvements in G2D come from learning dense visual features through pseudo segmentation (PS) or from treating PS as a regularization term during pre-training, we perturbed the semantic integrity of pseudo masks by randomly shuffling them on a sample-wise basis ( i.e., making images and pseudo masks unpaired). This operation detaches pseudo masks' semantic connection to the original images, ensuring that the PS task does not learn correct semantic information, but still provide regularisation to the segmentation as the pseudo mask is relatively smooth. The results are presented in Table 8. G2D with uncorrupted pseudo masks in PS (ours) significantly outperforms the results from the shuffled alternative (unpaired images and pseudo masks), not only in Table 8: Perturbation on Pseudo Masks. | Mask Construction | SIIM Dice | RSNA mAP | CXR14 AUC | |-------------------------------------------------|-------------|------------|-------------| | Pseudo Mask without Semantic Meaning (shuffled) | 50.9 ± 2.4 | 7.6 ± 1.2 | 63.7 ± 2.1 | | Pseudo Mask with Semantic Meaning (Ours) | 65.6 ± 1.7 | 15.9 ± 0.8 | 79.1 ± 1.2 | visual localisation task but also in visual recognition task. The improved performance demonstrate that the proposed G2D indeed learns transferable visual features thanks to the semantic information provided by the pseudo masks, rather than merely treating PS as a regularization mechanism. ## A.7 Pseudo Mask Visualization Figure 4: Pseudo Mask Visualization. Left: Aggregated attention map. Middle: Constructed pseudo mask for the pseudo segmentation task. Red and blue arrows point to areas related to specific text descriptions. Right: Corresponding radiology report. Red and blue text emphasize regions represented in the pseudo mask. <!-- image --> We visualize the aggregated attention map, pseudo mask, and paired medical reports in Fig 4. Intriguingly, without human annotations, both the attention map and pseudo mask successfully capture image regions corresponding to various report words. The pseudo masks manage to capture important parts of the image regions related to the highlighted words in the clinical reports, as indicated by the red and blue arrows in Fig 4. This suggests that the supervision signal for the PS pretext task is enriched by the clinical knowledge and high-level semantics, which explain why the PS pretext task may be better than the pixel reconstruction pretext task. ## NeurIPS Paper Checklist ## 1. Claims Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Answer: [Yes] Justification: The main claims presented in the abstract and introduction accurately represent the contributions and scope of the paper. ## Guidelines: - · The answer NA means that the abstract and introduction do not include the claims made in the paper. - · The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. - · The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. - · It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. ## 2. Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: Refer to Section A.1. ## Guidelines: - · The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. - · The authors are encouraged to create a separate "Limitations" section in their paper. - · The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. - · The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. - · The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. - · The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. - · If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. - · While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren't acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. ## 3. Theory Assumptions and Proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [NA] Justification: This work mainly includes empirical contributions. ## Guidelines: - · The answer NA means that the paper does not include theoretical results. - · All the theorems, formulas, and proofs in the paper should be numbered and crossreferenced. - · All assumptions should be clearly stated or referenced in the statement of any theorems. - · The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. - · Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. - · Theorems and Lemmas that the proof relies upon should be properly referenced. ## 4. Experimental Result Reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: We provide detailed experimental configurations in Sections 4.1, A.3, and A.4. Our code will be released after acceptance. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. - · If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. - · Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general. releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. - · While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example - (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. - (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. - (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). - (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. ## 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: Our experiments are all conducted on publicly accessible datasets, and all experiment details are illustrated in Sections 4.1, A.3, and A.4. For experiment implementation, we follow the official code of exisiting works, all code can be found in their official GitHub repository. ## Guidelines: - · The answer NA means that paper does not include experiments requiring code. - · Please see the NeurIPS code and data submission guidelines ( https://nips.cc/ public/guides/CodeSubmissionPolicy ) for more details. - · While we encourage the release of code and data, we understand that this might not be possible, so 'No' is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). - · The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines ( https: //nips.cc/public/guides/CodeSubmissionPolicy ) for more details. - · The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. - · The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. - · At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). - · Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted. ## 6. Experimental Setting/Details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: All experiment details are illustrated in Sections 4.1, A.3, and A.4. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. - · The full details can be provided either with the code, in appendix, or as supplemental material. ## 7. Experiment Statistical Significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [Yes] Justification: We report the error bars for all ablation studies. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper. - · The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). - · The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) - · The assumptions made should be given (e.g., Normally distributed errors). - · It should be clear whether the error bar is the standard deviation or the standard error of the mean. - · It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. - · For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). - · If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. ## 8. Experiments Compute Resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [Yes] Justification: Refer to the first part of Sections 4.1, A.3, and A.4. ## Guidelines: - · The answer NA means that the paper does not include experiments. - · The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. - · The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute. - · The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn't make it into the paper). ## 9. Code Of Ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ? Answer: [Yes] Justification: This work is conducted in accordance with the NeurIPS Code of Ethics. ## Guidelines: - · The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. - · If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. - · The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). ## 10. Broader Impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [Yes] Justification: Please refer to the Section A.2. ## Guidelines: - · The answer NA means that there is no societal impact of the work performed. - · If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. - · Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. - · The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. - · The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. - · If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). ## 11. Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [NA] Justification: This work does not focus on content generation and uses clinically verified datasets for all experiments. ## Guidelines: - · The answer NA means that the paper poses no such risks. - · Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. - · Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. - · We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. ## 12. Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: [Yes] Justification: Please refer to Section 4.1. ## Guidelines: - · The answer NA means that the paper does not use existing assets. - · The authors should cite the original paper that produced the code package or dataset. - · The authors should state which version of the asset is used and, if possible, include a URL. - · The name of the license (e.g., CC-BY 4.0) should be included for each asset. - · For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided. - · If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. - · For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. - · If this information is not available online, the authors are encouraged to reach out to the asset's creators. ## 13. New Assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [NA] Justification: There is no new assets released in this work. ## Guidelines: - · The answer NA means that the paper does not release new assets. - · Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. - · The paper should discuss whether and how consent was obtained from people whose asset is used. - · At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. ## 14. Crowdsourcing and Research with Human Subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [NA] Justification: This work has no human subjects. Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper. - · According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. ## 15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [NA] Justification: This work has no human subjects. ## Guidelines: - · The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. - · Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. - · We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. - · For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.
zqLAMwVLkt
Generative Semi-supervised Graph Anomaly Detection
This work considers a practical semi-supervised graph anomaly detection (GAD) scenario, where part of the nodes in a graph are known to be normal, contrasting to the extensively explored unsupervised setting with a fully unlabeled graph. We reveal that having access to the normal nodes, even just a small percentage of normal nodes, helps enhance the detection performance of existing unsupervised GAD methods when they are adapted to the semi-supervised setting. However, their utilization of these normal nodes is limited. In this paper, we propose a novel Generative GAD approach (namely GGAD) for the semi-supervised scenario to better exploit the normal nodes. The key idea is to generate pseudo anomaly nodes, referred to as 'outlier nodes', for providing effective negative node samples in training a discriminative one-class classifier. The main challenge here lies in the lack of ground truth information about real anomaly nodes. To address this challenge, GGAD is designed to leverage two important priors about the anomaly nodes -- asymmetric local affinity and egocentric closeness -- to generate reliable outlier nodes that assimilate anomaly nodes in both graph structure and feature representations. Comprehensive experiments on six real-world GAD datasets are performed to establish a benchmark for semi-supervised GAD and show that GGAD substantially outperforms state-of-the-art unsupervised and semi-supervised GAD methods with varying numbers of training normal nodes.
https://openreview.net/pdf/3c33b4f4c3c23708a8d12f3c6cbda3a20a9ca71e.pdf
[ { "confidence": 4, "rating": 5, "review_id": "Z2QN5ZkVlh", "review_text": "This paper works on node anomaly detection in the novel semi-supervised setting where few labeled normal nodes are given and proposes to generate new anomaly nodes to boost the training data. The anomaly generation algorithm is inspired by the empirical observation that:\n\n(1) Anomaly nodes have lower affinity score than normal nodes\n(2) Feature distribution of anomaly nodes are similar to normal nodes if they share similar neighborhood patterns.\n\n(1) The setting is novel and aligned to the real-world situation where normal nodes are typically known compared with anomaly nodes.\n\n(2) The motivation for the proposed two regularization losses is very intuitive and clear.\n\n(3) The experimental results are very impressive.\n\n(1) The proposed two regularization losses are heavily based on the empirical analysis, which might not transfer to other anomalies in other datasets. \n\n(2) For the second prior, its assumption that anomaly nodes sharing similar local structures would share a similar feature distribution has not been empirically verified.\n\n(3) Experiments miss the comparison with diffusion-based generative anomaly detection baseline.\n\n(1) As stated in the weakness, the core regularization loss terms are designed based on two assumptions:\n* The anomaly nodes have a lower affinity score than normal nodes. However, there is no comprehensive experimental verification of the other datasets on this. It might be better to provide the verification like Figure 1 but on more different datasets.\n* Anomaly nodes sharing similar neighborhood structures should possess similar feature distributions to their corresponding normal nodes. Although some references have been attached to justify this hypothesis, it might be better to include some empirical verification on this as well.\n\nFurthermore, there might be some contradiction between these two assumptions by themselves. First, if assumption 1 holds, it means anomaly nodes should share different local subgraphs with the normal nodes, which indicates that assumption 2 cannot hold. How do we mediate this situation?\n\n(2) Is there any difficulty when optimizing the loss according to Eq. (4) and Eq. (5) at the same time? Firstly, for Eq. (4), since the fixed terms would be embeddings of normal nodes and their neighbors, the embeddings of abnormal nodes ($\\hat{\\mathbf{h}}_i$ in Eq. (2)) would be optimized towards being further away from the neighbors' embeddings. However, Eq. (5) would also enforce the $\\hat{\\mathbf{h}}_i$ to be close to the normal one $\\mathbf{h}_i$. These two directions seem to be contradictory to each other. \n\n(3) Joint optimization according to Eq. (7) does not make sense under this generative augmentation setting. Here we use a generative model to augment the training data. This therefore should be that the training model is fixed. Moreover, if we jointly optimize the anomaly detection term and the other two generative terms, it would lead to the gradient for anomaly detection leaks to classification. This is quite confusing to me and might need more clarification.\n\n(4) How many layers of the subgraphs are used in optimizing the affinity score? If we use 2-hop neighbors, it might cause the computation to consider the significantly large number of nodes. If not, how should we decide on this parameter?\n\n(5) The comparison misses the baseline [1]\n\n[1] Liu, Kay, et al. \"Graph diffusion models for anomaly detection.\" (2024)." }, { "confidence": 3, "rating": 6, "review_id": "rA2IjZH4UJ", "review_text": "The paper proposes a novel approach called GGAD aimed at improving anomaly detection in graphs under a semi-supervised framework. GGAD generates pseudo anomaly nodes that serve as negative samples for training a one-class classifier. This method is built on two\nkey priors: asymmetric local affinity and egocentric closeness, which help in generating reliable outlier nodes that mimic real anomalies in terms of both graph structure and feature representation. Extensive experimental results demonstrate the effectiveness of the method across diverse graph anomaly detection datasets.\n\n1.The method is innovative. The proposed graph anomaly detection method can exploit the feature and structure information of normal nodes more effectively in the studied semi-supervised scenario compared to existing methods. The proposed two priors provide a meaningful characterization of desired properties of outliers in this semi-supervised setting and can be utilized to explore other beneficial priors further. \n\n2.The experiments in the paper are comprehensive and thorough.\n\n1. The model relies on prior knowledge to generate anomaly points. This prior knowledge can limit the model’s application scenarios. The model performs best only when the real anomalies align with this prior knowledge. For anomaly types that do not conform to the prior knowledge, the model may not effectively detect them.\n\n2.The model does not perform best on the Photo dataset in Table 1, and the article lacks an explanation of the results at the overall data level.\n\n3. This model employs a semi-supervised approach that uses some positive samples for training. However, it does not consider the issue of noise interference within the positive samples, namely, how the model overcomes interference when some positive samples are mislabeled.\n\n4. During the initialization step, only the initial feature of outliers are obtained while the connections between the outliers and normal nodes are not well illustrated in the paper. From Figure 2, one outlier is connected to more than one normal node while the feature of the outlier is generated according to single normal node. The neighborhood of outliers is important since the it involves the computation of node affinity score of outliers.\n\nsee weakness" }, { "confidence": 5, "rating": 5, "review_id": "25Yt6Bnugi", "review_text": "This paper introduces a novel generative-based GAD approach, named GGAD, tailored for the semi-supervised scenario. Unlike existing GAD frameworks, the authors highlight the feasibility and importance of a semi-supervised setting where labels for normal nodes are relatively easy to obtain during training, but labeled abnormal nodes are very limited. In this context, the paper proposes generating pseudo-anomaly nodes to serve as substitutes for real anomaly nodes in training, thus aiding in anomaly detection. These pseudo-anomalies are generated through two unique loss-guidance mechanisms. Experimental results demonstrate the effectiveness of GGAD.\n\nHowever, the description of the semi-supervised setting in this paper lacks clarity and unconvincing. Additionally, there is minimal differentiation between the proposed method and existing works that generate pseudo-anomaly samples for data augmentation. I think this paper's novelty is limited. I still think that doing unsupervised GAD is more necessary, and if the authors can prove that the pseudo-outlier proposed by GGAD can benefit unsupervised GAD as a general module, I can up my score.\n\n1.The complete experiment shows the effectiveness of the method and the necessity of each component.\n\n2.Some visual illustrations help the reader understand, although the shapes of the images seem to be compressed.\n\n1. I am still confused about the motivation for performing semi-supervised GAD. Why do most methods emphasize unsupervised scenarios? The cost of labeling normal nodes seems too expensive, as the authors themselves state on lines 268 to 269, yet they assert again on line 31 that labels for normal nodes are easy to obtain.This inconsistency hinders a clear understanding of the necessity and practical applications of semi-supervised GAD, which significantly undermines the motivation for this work.\n\n2. While the first loss function proposed by the authors appears intuitively valid, the second loss function aims to generate outliers similar to normal nodes. In my opinion, optimizing these two losses together is unreasonable because they conflict with each other. It seems that they should correspond to different outlier generation processes\n\n3. The paper validates the improvement of unsupervised GAD using labeled normal nodes and claims that GGAD remains superior. I think the authors ignore the fact that unsupervised methods do not obtain this outlier like GGAD and this comparison is not reasonable.\n\n1. why semi-supervised GAD is more important than unsupervised GAD, How do you overcome the labeling cost?\n2. If unsupervised GAD methods use outliers in GGAD, is it beneficial for them?\n3. why Eq.5 need Gaussian noise?\n4.In addition to the outlier generation methods mentioned on lines 376-396 (they seem overly simplistic), are there more advanced methods for generating outliers similar to GGAD? How does GGAD compare to them?" }, { "confidence": 3, "rating": 5, "review_id": "oNvnPj5Plf", "review_text": "This paper explores the problem of semi-supervised graph anomaly detection (GAD), where some nodes are known to be normal, in contrast to the typical unsupervised setting with no labeled data. The authors show that even a small percentage of labeled normal nodes can improve the performance of existing unsupervised GAD methods when adapted to the semi-supervised scenario. The paper proposes a novel Generative GAD approach (GGAD) to better exploit normal nodes by generating pseudo anomaly nodes, called 'outlier nodes', to provide effective negative samples for training a one-class classifier. GGAD generates these outlier nodes using priors about anomaly nodes, such as asymmetric local affinity and egocentric closeness, to mimic anomalies in structure and features. Experiments on six real-world GAD datasets show that GGAD outperforms state-of-the-art methods in both unsupervised and semi-supervised settings.\n\n+ This paper studies a new problem of semi-supervised GAD that has not been widely studied. \n\n+ The proposed method is simple and effective from the empirical perspective.\n\n+ The experiments are extensive including effectiveness and efficiency analyses and the method has been tested on real-world large-scale graphs to verify the scalability.\n\n- The two priors that are used to generate outlier nodes are heuristic or based on empirical evidence. There is no theoretical analysis provided to better guarantee the effectiveness of the proposed method.\n\n- It will be more interesting and helpful to show the generated outlier nodes can capture the characteristics of anomalous nodes in addition to comparing their representations.\n\n- The experimental settings of anomaly contamination are not very clear: how the contamination is introduced?\n\n- Overall experimental settings. What hardware has been used in the experiments, e.g., memory, and why are the experiments conducted on CPUs?\n\n1. Theoretical analysis of the proposed method, especially these two priors.\n\n2. Experimental settings including hardware and anomaly contamination.\n\n3. Analysis of the generated outlier nodes." }, { "confidence": 4, "rating": 7, "review_id": "JVY0ZfV1dW", "review_text": "The paper studies an under-explored graph anomaly detection problem where the detection models have access to a set of labeled normal nodes. To tackle this problem, it introduces a generative approach namely GGAD that generates pseudo anomaly nodes, called outlier nodes, to support the training of a discriminative one-class classifier. The key idea underlying this approach is to generate the outlier nodes in a way that can well simulate real anomaly nodes in both graph structure and feature representation perspectives. To achieve this, GGAD defines and incorporates two priors, including asymmetric local affinity and egocentric closeness, into its optimization objectives, with the former prior focusing on the alignment on the graph structure aspect and the latter on the feature representation aspect. The method is evaluated on six large real-world datasets and shows impressive detection performance compared to existing state-of-the-art methods.\n\n- The paper is generally well-written and easy-to-follow.\n- The problem setting is practical since labeled normal samples are easy to obtain in many real-world applications. Compared to the commonly studied unsupervised setting, this semi-supervised setting often results in better detection performance.\n- The proposed method GGAD is novel. There have been many generative anomaly detection methods, but as far as I know, they are unable to consider the graph structure and the neighboring nodes’ representations. By introducing the two new priors, GGAD addresses this issue well. Fig.1 and Fig. 3 help demonstrate this effect.\n- The method is compared with a range of unsupervised and semi-supervised methods on 6 real-world datasets with diverse genuine anomalies, and gains largely improved detection performance over these competing methods.\n- The ablation study is plausible and justifies the contribution of each proposed prior.\n\n- The outlier node generation in GGAD may cause non-trivial computational overhead.\n- Despite better performance than the competing methods, GGAD gains an AUC of only around 0.6 on some datasets, such as DGraph and Reddit.\n- In Fig. 4 (b), GGAD shows a fast AUPRC growth with increasing training size, but the other methods have a flat performance trend. What would be the reason behind?\n\nSee the weakness" } ]
zpw6NmhvKU
RashomonGB: Analyzing the Rashomon Effect and Mitigating Predictive Multiplicity in Gradient Boosting
The Rashomon effect is a mixed blessing in responsible machine learning. It enhances the prospects of finding models that perform well in accuracy while adhering to ethical standards, such as fairness or interpretability. Conversely, it poses a risk to the credibility of machine decisions through predictive multiplicity. While recent studies have explored the Rashomon effect across various machine learning algorithms, its impact on gradient boosting---an algorithm widely applied to tabular datasets---remains unclear. This paper addresses this gap by systematically analyzing the Rashomon effect and predictive multiplicity in gradient boosting algorithms. We provide rigorous theoretical derivations to examine the Rashomon effect in the context of gradient boosting and offer an information-theoretic characterization of the Rashomon set. Additionally, we introduce a novel inference technique called RashomonGB to efficiently inspect the Rashomon effect in practice. On more than 20 datasets, our empirical results show that RashomonGB outperforms existing baselines in terms of improving the estimation of predictive multiplicity metrics and model selection with group fairness constraints. Lastly, we propose a framework to mitigate predictive multiplicity in gradient boosting and empirically demonstrate its effectiveness.
https://openreview.net/pdf/838fbeed0eab05add105305af9fefdf722fe747f.pdf
[ { "confidence": 4, "rating": 6, "review_id": "tcA0QhNUXj", "review_text": "This paper proposes a method (RashomonGB ) to estimate the Rashomon sets/predictive multiplicity of gradient boosting models. It estimates multiple ($m$) models at each stage (effectively performing a local exploration) and then combine all such models in the end to construct $m^T$ models for Rashomon set computation, where $T$ is the number of iterations of the boosting. On several datasets the paper shows that RashomonGB performs better than re-training with $m$ seeds, in that at the fix $\\epsilon$ (loss difference) level, RashomonGB tends to show more predictive multiplicity.\n\nPredictive multiplicity is an important topic. The paper is generally clear and well-written. The proposed method is a sensible first method for boosting algorithms, which was previously underexplored. I think the proposed method is likely adopted by people who care about this problem as it's intuitive and easy to implement.\n\n1. The current exploration strategy is fast to compute, but I'm not sure if this follows the motivation of Rashomon set very well. While the authors mention one example on the Contraception dataset where re-training underestimates the predictive multiplicity, in general RashomonGB might create models that are more correlated than normal (because the \"backbone\" is the same GB model), thus underestimating the predictive multiplicity. Right now, the conclusion shows otherwise probably because the number of re-training is too small. \n\n2. Regarding the experiment, if I read this correctly, currently we use more compute for RashomonGB as well (by combining different weak models), so it is also not quite a fair comparison in my opinion. I would be very interested to see some estimate of how much compute RashomonGB saves against re-training, by running more re-training and see when are the metrics in Fig3 in the two methods become comparable.\n\n\n\nminor: one \"RashomonGB\" in L290 should be \"re-training\".\n\n1. What's $\\epsilon_{t_1}$ (and $\\epsilon_{t_2}$) in L243-L244? Isn't epsilon a quantity set by the user? \n\n\n2. In L282-283, do we construct 10 final models and 1024 for re-training and RashomonGB, respectively? If only 2 out of $m$ models are used why train $m$ of them (L282-283) for RashomonGB? \n\n3. Related to the above, I originally thought there is a model \"filtering\" step in each iteration $t$, and wonder how $\\epsilon_t$ is set for each iteration. However, from L282-283 it seems like we just randomly pick a few models and brute-force combine all weak models for the final Rashomon set exploration. Could the authors clarify?\n\n4. Are Fig 4 measured on the test set? If so, then it's not clear how useful this is as we cannot choose models basing on test performance - did the authors try picking models on the frontier basing on the validation set and then plot this on the test set? Right now, due to the sheer number of final models generated by RashomonGB, it's unclear if the models with better trade-off are just lucky." }, { "confidence": 2, "rating": 5, "review_id": "0pO4zAVBB0", "review_text": "This paper presents an approach that compute Rashomon set for gradient boosting algorithm where the set can be obtained through products over weak learners at each step rather than sampling them through retraining. The authors further proposed a dataset related Rashomon bound through sub-Gaussian assumption, where mutual information between hypothesis space and dataset shows the predictive multiplicity, which can further decomposed into model uncertainty and quality of data. Experiments show the proposed solution offers more models in Rashomon set than retraining given the same computation budget.\n\nThe rough idea of the proposed approach is straightforward since decomposing Rashomon set search on boosting algorithm can be a \"standard\" operation given the unique residual learning property of boosting algorithms. The novelty of the proposed approach is probably more from \"our work is the first to explore the Rashomon effect for gradient boosting\".\n\nThe dataset related Rashomon set bound seems an interesting point. But it needs some justification for the key assumption of it (sub-Gaussian). Proposition 2 seems make sense given the positive relation between number of boosting iterations and Rashomon set (also for dataset size).\n\nExperiments in 4.2 seem interesting. I would love to see more experiments like it.\n\nI got some difficult time to understand the introduction and abstract of this paper even I have read some literatures about Rashomon effect and predictive multiplicity. It is simply hard to read given the narrative there. Especially the second paragraph of introduction; it gets me confused and self-questioning my understanding of Rashomon effect from other works.\n\nWhy boosting algorithms? \nFurther justification about the dataset related Rashomon set bound?" }, { "confidence": 4, "rating": 7, "review_id": "ouYKGioKR8", "review_text": "The paper studies the Rashomon effect in gradient boosting, a commonly used algorithm for tabular datasets, but something that has not received enough attention in multiplicity literature. The paper provides several theoretical discussions on the size of the Rashomon set and the impact of the number of iterations on multiplicity in GBRTs. Furthermore, the paper proposes RashomonGB, a method to create an exponential number of ‘near-optimal models’ by training only a polynomial number of models. With more models in the Rashomon set, the use of RashomonGB can create several downstream benefits without any extra cost of training, shown empirically by the authors.\n\n- Multiplicity in GBRTs, or generally any gradient-boosting algorithm, has not been studied in the literature, and so the authors provided a novel discussion, especially given the importance of these algorithms in tabular settings.\n- The paper provides several theoretical discussions backed by empirical support. The insights on the growing Rashomon set with iterations were quite interesting, although I have concerns about the validity of these insights (see Weaknesses).\n- Multiplicity quantification can be quite costly, and various methods in pursuit of reducing this cost can significantly benefit further auditing. The use of RashomonGB, as proposed by the authors, can be an important step in that direction for gradient-boosted algorithms.\n\n- While the presentation of the rest of the concepts and the theoretical discussion were easy to follow, important details about the RashomonGB method and the details of the empirical setup were either missing (even from the Appendix) or imprecise. For instance, the Rashomon set of the gradient boosting algorithm isn’t going to simply be the iterative extension of Rashomon sets at every residual level, i.e., equation 4 is imprecise. Similarly, it seems that the epsilon value of the Rashomon set increases with more iterations, and thus it is confusing to me whether the insight that more iterations create bigger Rashomon sets is a result of multiple iterations or simply a result of bigger epsilon. See the section ‘Questions’ for more detailed comments and some follow-up questions. Edit after rebuttal: Acknowledged, correct and clarified.\n- There are other methods to measure predictive uncertainty in gradient-boosted algorithms. Some examples based on a cursory search (there might be more, as I’m not too familiar with GBRTs) - https://arxiv.org/abs/2205.11412 https://arxiv.org/pdf/1910.03225 https://arxiv.org/abs/2106.01682 -
While I understand that prediction uncertainty is not the same as predictive multiplicity, the two are closely related, and when proposing a better method to measure multiplicity, the paper should compare itself with other stronger baselines than just retraining. Just as previous works have proposed using Monte Carlo Dropout (which was initially created as a method to measure uncertainty) as a measure of multiplicity, uncertainty measurement baselines for GBRTs could have been adopted to create reasonable baselines, and would have made the results a lot stronger. Edit after rebuttal: Acknowledged and added.\n\nMy questions and comments mostly revolve around the RashomonGB formulation.\n- I don’t believe equation 4 is correct. A model formed from residual models that are present in their Rashomon sets at every step does not necessarily make a model that will be present in the Rashomon set overall. That’s because the composition of GBRTs occurs at the prediction level, while Rashomon sets are defined by the authors at the loss level. Equation 4 probably would have been true if the loss function had a linear relationship with the model predictions, which is not an assumption I see being made anywhere in the paper. This also makes me question the empirical results, because if the RashomonGB formulation isn’t precise, do the models across which the authors calculate multiplicity even belong to the same Rashomon set? Edit after rebuttal: Acknowledged and corrected.\n- Can the authors comment on why they compare two situations with different Rashomon parameters and make claims on their multiplicity? For example, Proposition 3 and the following paragraph. A Rashomon set would of course be bigger with a larger value of epsilon, and having that variability when talking about other trends doesn’t seem convincing to me. Edit after rebuttal: Confusion clarified.\n- What was the exact epsilon value used for the experiment? I couldn’t find it anywhere in the paper. Moreover, I hope that given the Rashomon sets for the RashomonGB setup were defined with T*epsilon as the new epsilon value, the same freedom was also given to retraining. Again, if the comparison was done across methods with different epsilon values (which might not be the case, but I don’t know the details), that does not make sense to me. Edit after rebuttal: Appropriate information added." }, { "confidence": 2, "rating": 6, "review_id": "8Elq8CwQT8", "review_text": "The paper explores the concept of predictive multiplicity in gradient boosting models. The Rashomon effect refers to the existence of multiple models that perform similarly well on a given dataset. The authors formalize this effect in the context of gradient boosting, introduce a new method called RashomonGB to efficiently explore this multiplicity, and demonstrate its application on various datasets. The paper aims to improve the estimation of predictive multiplicity and model selection, especially with considerations for group fairness.\n\n1. The introduction of RashomonGB represents a novel method for exploring the Rashomon set in gradient boosting, offering an exponential search space as opposed to traditional linear methods.\n2. The paper provides a robust theoretical foundation using statistical learning and information theory to analyze the Rashomon effect, enhancing the understanding of this phenomenon in gradient boosting.\n3. The authors demonstrate the practical utility of RashomonGB on a wide range of real-world datasets, including tabular and image data, showcasing its versatility and effectiveness.\n\n1. While the paper discusses the positive societal impacts of RashomonGB, it lacks a thorough exploration of potential negative impacts or misuse of the method.\n2. The theoretical analysis relies on several assumptions that may not hold in all practical scenarios, potentially limiting the generalizability of the findings.\n3. The paper mentions the intention to release code post-review, but the lack of immediate open access to code and data can hinder reproducibility and independent validation by other researchers.\n4. Implementing RashomonGB might be complex for practitioners without a strong background in the theoretical aspects of machine learning and gradient boosting, potentially limiting its adoption in the industry.\n\n1. Can the method be extended or adapted for other types of machine learning models beyond gradient boosting?\n2. How does the choice of hyperparameters in RashomonGB affect the stability and reliability of the results?\n3. What are the practical challenges faced during the implementation of RashomonGB, and how can they be addressed to facilitate broader adoption?" } ]
znBiAp5ISn
TAS-GNN: Topology-Aware Spiking Graph Neural Networks for Graph Classification
The recent integration of spiking neurons into graph neural networks has been gaining much attraction due to its superior energy efficiency. Especially because the irregular connection among graph nodes fits the nature of the spiking neural networks, spiking graph neural networks are considered strong alternatives to vanilla graph neural networks. However, there is still a large performance gap for graph tasks between the spiking neural networks and artificial neural networks. The gaps are especially large when they are adapted to graph classification tasks, where none of the nodes in the testset graphs are connected to the training set graphs. We diagnose the problem as the existence of neurons under starvation, caused by the irregular connections among the nodes and the neurons. To alleviate the problem, we propose TAS-GNN. Based on a set of observations on spiking neurons on graph classification tasks, we devise several techniques to utilize more neurons to deliver meaningful information to the connected neurons. Experiments on diverse datasets show up to 27.20% improvement, demonstrating the effectiveness of the TAS-GNN.
https://openreview.net/pdf/7ce7c8cc5374dbd6686b378ef8174a06b76e4183.pdf
[ { "confidence": 4, "rating": 6, "review_id": "3IYgelN3ZX", "review_text": "There's a large performance gap for graph tasks, especially graph classification tasks, between the spiking neural networks and artificial neural networks. The authors proposes the problems as the neuron's under starvation and illustrated the reason of the problem. To solve the problem, TAS-GNN was proposed.\n\nThe main contributions of the paper are as follows:\n1: Starvation problem of spiking neurone in GNNs in graph classification tasks are identified.\n\n2: A strategy was proposed to address the spike frequency deviations on the basis of the correlation between graph topology and spike frequency patterns.\n\nThe authors conduct experiments on 5 popular datasets and use several different designs of GNN layer. The results show competitive potential of the TAS-GNN.\n\n1:This is a well-written paper, from the formulation of the problem to the solution. The author's motivation for the use of graph topology is clear.\n\n2:The method of using topology-awaregroup-adaptive neurons shows competitive results compared with other baselines. The ablation study makes the result more persuasive. \n\n3: The Figures in the paper are quite straightforward, easy to follow.\n\n1: The name of the paper is \"Topology-Aware Spiking Graph Neural Networks\". However, as I can tell the only graph topology used in the method is nodes degree, which is used to group the neurons. I wonder if it is appropriate to name it as \"topology aware\", or the author can explain it more.\n\n2: The analysis regarding the performance of the method is lack of discussion. For instance, in some datasets, such as MUTAG and IMDB-Binary, the proposed method achieve quite competitive results while in PROTEINS it doesn't. It's betted to explain what cause the phenomenon, like the characteristics of the datasets? Also, in table 2, the results of GAT and GAT+TAG in IMDB-Binary are the same. It's better to make an explanation about them.\n\n3: There're several typos and basic grammar mistakes in the paper that will affect the presentation of the paper. In line 120 \" and apply is to\"; The sentence in line 123 is hard to understand\n\n1: In section 3 the authors mentioned the hypothesis that the phenomenon mentioned above is caused by the topology of the real-world graphs. What motivates you to have the hypothesis?" }, { "confidence": 5, "rating": 7, "review_id": "B4knkzsRSl", "review_text": "This paper primarily discusses integrating Spiking Neural Networks (SNNs) into Graph Neural Networks (GNNs) to address several key challenges in graph classification tasks. Specifically, the paper proposes a new method called TAS-GNN (Topology-Aware Spiking Graph Neural Networks) which leverages the topology of graphs to improve the performance of spiking neural networks in graph classification tasks.\n\n(1)The authors clearly articulate the performance gap between existing Graph Neural Networks (GNNs) and Spiking Neural Networks (SNNs) in graph classification tasks.\n(2)The authors conduct an in-depth analysis of the performance degradation of spiking neural networks in graph classification tasks and introduce the \"neuron starvation\" problem.\n(3)The authors propose topology-aware group-adaptive neurons (TAG) based on the graph's topology, a novel approach that helps address the neuron starvation issue.\n(4)The authors provide a detailed description of how to convert input graphs into spike representations, perform message passing, and classify the graphs.\n(5)The authors validate the method's generalizability and effectiveness by using multiple public datasets (such as MUTAG, PROTEINS, ENZYMES, NCI1, IMDB-BINARY) in the experimental section.\n\n(1)The authors mention several application areas and challenges, but the references and comparisons to existing literature are not sufficiently comprehensive.\n(2)Although the methodology section describes the main steps, it lacks detailed descriptions of some key aspects such as threshold initialization and the specific training process.\n(3)Although there are some ablation studies, the analysis of the individual contributions of each component is insufficient, making it difficult to determine the specific impact of each component on the overall performance improvement.\n\n(1)Could you provide more details on how the neuron starvation problem was diagnosed? Specifically, what metrics or observations were used to identify this issue in SNNs for graph classification?\n(2)The paper mentions the use of learnable initial thresholds for neurons. Could you elaborate on how these initial values are set and what specific strategies or heuristics were used to determine them?\n(3)Conduct a more thorough ablation study to analyze the independent contributions of each component (e.g., TAG, learnable initial thresholds) to the overall performance. This will help readers understand the significance of each part of the proposed method.\n(4)The sensitivity analysis shows variations in performance with different initial thresholds and learning rates. Could you explain why certain thresholds or learning rates were more effective and how they were chosen?\n(5)How does TAS-GNN scale with very large graphs in terms of computational efficiency and memory usage? Are there any specific optimizations or techniques used to handle large-scale datasets?\n(6)While the paper compares TAS-GNN with several baseline methods, could you consider including comparisons with more recent or advanced GNN models that have shown strong performance in graph classification tasks?\n(7)Have you tested TAS-GNN on any real-world applications or datasets beyond the ones mentioned? If so, could you share the results and insights gained from these experiments?" }, { "confidence": 4, "rating": 3, "review_id": "QplC2giKwy", "review_text": "The paper presents a novel approach called TAS-GNN (Topology-Aware Spiking Graph Neural Networks) to address the performance gap between spiking neural networks (SNNs) and artificial neural networks (ANNs) in graph classification tasks. The authors identify a \"starvation\" problem in spiking neurons within GNNs, where many neurons do not emit any spikes during inference, leading to severe information loss. This problem is more pronounced in graph classification tasks, where the test set graphs are independent from the training set, unlike in transductive or inductive learning settings.\n\n1.\tThis paper identifies a critical \"starvation\" problem in spiking neurons within Graph Neural Networks (GNNs), where many neurons do not emit any spikes during inference, leading to severe information loss. This problem is more pronounced in graph classification tasks, where the test set graphs are independent from the training set\n2.\tThe paper proposes a novel approach called TAS-GNN (Topology-Aware Spiking Graph Neural Networks) to address the graph classification problem.\n\n1.\tThe authors use the node degree instead of the concept of topology, there’s a large gap between the graph topology and node degree.\n2.\tThe authors solve the graph classification task as a contribution, which is not a significant challenge for spiking graph neural networks.\n3.\tThe advantage of Spiking Neural Networks (SNN) is their low energy consumption. However, the paper does not mention the feature, so it is unclear why graph neural networks should be combined with SNN. The motivation behind TAS-GNN is not clear.\n\nThe important points listed in weakness 1-3." }, { "confidence": 4, "rating": 6, "review_id": "iP8GDzFwhc", "review_text": "This paper proposes topology-aware spiking graph neural networks with adaptive thresholds based on a group of neurons for graph classification. The paper first diagnoses the poor performance as the existence of neurons under starvation caused by the graph structure. Then the paper proposes the adaptive threshold among neurons partitioned by degrees, as well as the learnable initial threshold and decay rate to reduce the sensitivity. Experiments on several datasets show superior performance of the proposed method.\n\n1. This paper proposes the first SNN design to target graph classification.\n\n2. This paper identifies the starvation problem and proposes a novel topology-aware group-adaptive technique.\n\n3. Experiments show superior performance on several datasets, some outperforming ANNs.\n\n1. The proposed method seems to be a hybrid ANN-SNN model rather than a pure SNN design. The paper did not discuss how this will affect the deployment of the model on potential neuromorphic hardware, since SNNs mainly target those hardware to obtain energy efficiency.\n\n2. The paper did not discuss the (theoretical) energy efficiency estimation, which is a major motivation for considering SNNs as stated in Introduction.\n\n3. Or if the motivation is to get models with better performance than ANN, then Table 1 does not include state-of-the-art ANN results for comparisons.\n\nSome recent works also study SNN for link prediction tasks in graphs [1] besides node-level classification, which may be discussed.\n\n[1] Temporal Spiking Neural Networks with Synaptic Delay for Graph Reasoning. ICML 2024." } ]
zn6s6VQYb0
GraphCroc: Cross-Correlation Autoencoder for Graph Structural Reconstruction
Graph-structured data is integral to many applications, prompting the development of various graph representation methods. Graph autoencoders (GAEs), in particular, reconstruct graph structures from node embeddings. Current GAE models primarily utilize self-correlation to represent graph structures and focus on node-level tasks, often overlooking multi-graph scenarios. Our theoretical analysis indicates that self-correlation generally falls short in accurately representing specific graph features such as islands, symmetrical structures, and directional edges, particularly in smaller or multiple graph contexts.To address these limitations, we introduce a cross-correlation mechanism that significantly enhances the GAE representational capabilities. Additionally, we propose the GraphCroc, a new GAE that supports flexible encoder architectures tailored for various downstream tasks and ensures robust structural reconstruction, through a mirrored encoding-decoding process. This model also tackles the challenge of representation bias during optimization by implementing a loss-balancing strategy. Both theoretical analysis and numerical evaluations demonstrate that our methodology significantly outperforms existing self-correlation-based GAEs in graph structure reconstruction.
https://openreview.net/pdf/57096dd4679d0699198e3899786b24845b43c7a8.pdf
[ { "confidence": 4, "rating": 5, "review_id": "JVFBYcSJ2e", "review_text": "This paper proposes a cross-correlation autoencoder for graph structural reconstruction. The authors first analyze the problems of existing self-correlation encoder. Then, a cross-correlation autoencoder is designed. Experimental results show the effectiveness of the cross-correlation autoencoder.\n\n1. The motivation is clear and the cross-correlation autoencoder is reasonable.\n2. The paper is well-written and easy to follow.\n3. The experiments are comprehensive.\n\n1. The authors mention that the current self-correlation methods can not address specific (sub)graph structures. But this paper only presents an overall experimental performance. It is unclear how the proposed cross-correlation autoencoder performs given a specific graph structure. \n\n2. It is not clear whether the graph dataset used in the paper is a directed or undirected graph. Since the cross-correlation autoencoder can represent the directed graph effectively, it is suggested to consider the directed graph dataset.\n\n3. More different architectures of the encoder and decoder should be employed to further verify the effectiveness of the cross-correlation mechanism.\n\nsee Weakness." }, { "confidence": 4, "rating": 6, "review_id": "wozjB4vJhN", "review_text": "This paper proposed a method to address the limitations of existing graph autoencoder (GAE) models that primarily rely on self-correlation for graph structure representation. They claim existing GAE often fail to accurately represent complex structures like islands, symmetrical structures, and directional edges, particularly in smaller or multiple graph contexts. The proposed model, GraphCroc, introduces a cross-correlation mechanism that aims at enhancing the representational capabilities of GAEs. It employs a mirrored encoding-decoding process to ensure robust structural reconstruction and introduces a loss-balancing strategy to tackle representation bias during optimization.\n\n1. The idea to introduce two latent space for reconstructing the graph structure is \"simple and intuitive\". \n\n2. The writing is clear and easy to follow.\n\n3. The experimental results are sound.\n\n1. This paper lacks discussion on related works. There already exists some works trying to solve the graph autoencoder structure recovering issues. For example, including position encoding [1] or adding extra node labels [2]. How the proposed method is compared with these methods, from the perspective of effectiveness and efficiency?\n\n[1] You, Jiaxuan, Rex Ying, and Jure Leskovec. \"Position-aware graph neural networks.\" International conference on machine learning. PMLR, 2019.\n\n[2] M. Zhang, P. Li, Y. Xia, K. Wang, and L. Jin, Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node Representation Learning, Advances in Neural Information Processing Systems (NeurIPS-21), 2021.\n\n2. As the proposed method generate two latent embeddings, I wonder if there exists some techniques to control them to be different with each other? Otherwise I am concerned that whether the two embeddings could converge to each others.\n\nsee above weakness" }, { "confidence": 4, "rating": 5, "review_id": "ZjrWIOhtku", "review_text": "This paper theoretically analyzes the limitations of existing graph autoencoders (GAE) in representing special graph features such as islands, symmetrical structures, and directional edges. To address this, the paper proposes a new GAE method, GraphCroc, which employs a cross-correlation mechanism that significantly enhances the representational capabilities of GAEs.\n\n1. The paper clearly shows the limitations of existing GAEs through theoretical analysis.\n\n2. The experimental results demonstrate the advantages of the proposed method in structural reconstruction and graph classification tasks.\n\n3. The paper is easy to follow.\n\n1. In Table 1, the improvements of GraphCroc are evident only on two datasets.\n\n2. While the proposed cross-correlation method performs better than the general self-correlation method on island, symmetric structures, and directed graphs, it would be beneficial to include more results in reconstruction visualization, particularly regarding island or directed edge reconstruction.\n\n3. Some related works [1] need to be discussed.\n\n[1] Liu, Chuang, et al. \"Where to Mask: Structure-Guided Masking for Graph Masked Autoencoders.\" arXiv preprint arXiv:2404.15806 (2024).\n\n1. How about the performance of the proposed method on directed graphs?" } ]
zm1LcgRpHm
Segment, Shuffle, and Stitch: A Simple Layer for Improving Time-Series Representations
Existing approaches for learning representations of time-series keep the temporal arrangement of the time-steps intact with the presumption that the original order is the most optimal for learning. However, non-adjacent sections of real-world time-series may have strong dependencies. Accordingly, we raise the question: Is there an alternative arrangement for time-series which could enable more effective representation learning? To address this, we propose a simple plug-and-play neural network layer called Segment, Shuffle, and Stitch (S3) designed to improve representation learning in time-series models. S3 works by creating non-overlapping segments from the original sequence and shuffling them in a learned manner that is optimal for the task at hand. It then re-attaches the shuffled segments back together and performs a learned weighted sum with the original input to capture both the newly shuffled sequence along with the original sequence. S3 is modular and can be stacked to achieve different levels of granularity, and can be added to many forms of neural architectures including CNNs or Transformers with negligible computation overhead. Through extensive experiments on several datasets and state-of-the-art baselines, we show that incorporating S3 results in significant improvements for the tasks of time-series classification, forecasting, and anomaly detection, improving performance on certain datasets by up to 68\%. We also show that S3 makes the learning more stable with a smoother training loss curve and loss landscape compared to the original baseline. The code is available at https://github.com/shivam-grover/S3-TimeSeries.
https://openreview.net/pdf/d5ba68bdf83d04632580f0b9e7ac80199a8c19c5.pdf
[ { "confidence": 4, "rating": 5, "review_id": "WUjKloq9SX", "review_text": "This paper introduces a new method for time-series representation learning that enhances the modeling of non-adjacent segment dependencies. Specifically, the proposed method segments, shuffles in a learned manner and stitches the shuffled segments to combine with original time series. The proposed method is model-agnostic without adding significant parameter overhead and shows performance improvement across multiple classification and forecasting base models.\n\n1. The proposed method permutes the original segments to better capture inter-relations between distant segments. It is model-agnostic and introduces minimal parameter overhead to the original model.\n\n2. Extensive experiments on various base models for both classification and forecasting tasks demonstrate the effectiveness of the proposed method.\n\n1. It it not clear how the sorting process, specifically the calculation of permutation $\\sigma$ from $P$, is made differentiable.\n\n2. The compared forecasting baselines such as Informer are no longer state-of-the-art methods. Adding more recent baselines such as Time-LLM, GPT4TS, DLinear, PatchTST would provide a clearer understanding of the proposed method's comparative benefits.\n\n3. The basic assumption for S3 is that modeling non-adjacent dependencies is important. However, the paper lacks detailed case studies that demonstrate the specific types of non-adjacent dependencies effectively captured by S3, which are not addressed by existing models. Additionally, there is no case study to validate that the learned shuffling weights accurately represent these segment dependencies.\n\n1. The results in Tables 1, 2, and 3 seem to indicate more significant improvements in multivariate than in univariate time series tasks. Any reason behind this?\n\n2. What does the \"number of segments\" represent in Figure 6 and Figure A3? Is it the number of segments for the first layer or the final layer? If it refers to \"n\", then in Figure A3, this number seems to perform the best when it is larger than 100 for some datasets?\n\n3. Could you describe the inference process for the S3 method? Additionally, what are the computational overheads for training and inference times for S3?" }, { "confidence": 3, "rating": 4, "review_id": "Kb7xCYTdD8", "review_text": "This paper introduces a plug-and-play mechanism called Segment, Shuffle, and Stitch (S3) designed to enhance time-series representation learning in existing models. S3 operates by dividing the original sequence into non-overlapping segments and shuffling them in a learned manner that is optimal for the given task. It then reattaches the shuffled segments and performs a learned weighted sum with the original input to capture both the newly shuffled sequence and the original sequence. This proposed model can enhance the performance of specific models in classification and prediction tasks.\n\nThe paper is easily comprehensible and straightforward.\n\nSufficient experiments are conducted to confirm the effectiveness of the method.\n\nLack of comparative methods:\nIn fact, the proposed method seems to share the same spirit as data augmentation methods in the time series field[1-4]. Why hasn't any data augmentation method been compared?\n\n\nSelection of baseline models:\nThe selected baseline model, Informer, seems somewhat outdated. Why not choose a more recent model, e.g., iTransformer[5] or PatchTST[6]?\n\n\nDataset for prediction task:\nThe author conducted experiments on three ETT datasets, but for prediction tasks, more datasets should be considered, e.g., traffic, electricity, and weather.\n\n\nTime-Series Representation Claim:\n As the author pointed out, more tasks should be considered for time series representation learning.\n\n\n[1]FRAUG: FREQUENCY DOMAIN AUGMENTATION FOR TIME SERIES FORECASTING [2]Time Series Data Augmentation for Deep Learning: A Survey [3]SimPSI: A Simple Strategy to Preserve Spectral Information in Time Series Data Augmentation [4]TOWARDS DIVERSE AND COHERENT AUGMENTATION FOR TIME-SERIES FORECASTING [5]ITRANSFORMER: INVERTED TRANSFORMERS ARE EFFECTIVE FOR TIME SERIES FORECASTING [6]A TIME SERIES IS WORTH 64 WORDS: LONG-TERM FORECASTING WITH TRANSFORMERS\n\nWhat are the essential differences between the proposed method and other data augmentation methods?" }, { "confidence": 5, "rating": 8, "review_id": "PQ6MFEkGOn", "review_text": "This paper proposes a new neural network design element which segments, shuffles, and stitches time series for improved representation learning. They evaluate their methods on forecasting and classification tasks, and show that S3 benefits some widely used baselines.\n\n1. To the best of my knowledge, the idea is novel, and fundamentally challenges and changes how to learn representations for time series data\n2. The paper is well written and easy to follow\n3. Experiments are well-designed, and results are promising\n\nI have not found any major weaknesses in the methodology or experimental design. However, I think that the paper might benefit from showing what the S3 module is actually learning. For example, the authors can include the segmented, shuffled, and stitched time series on a particular dataset as an example, along with the weighted time series (used as input to the model), and the original time series. This might provide some intuition as to how this design element improves predictive performance. \n\nI think there's always scope to improve experimental design. TS2Vec is a excellent choice for classification, but not for forecasting. I would recommend that the authors use methods such as PatchTST (transformer-based) or iTransformer, TimesNet (CNN-based), N-BEATs or N-HITS (MLP-based) etc. for time series forecasting. For classification, it would also be good to compare with fully supervised methods such as ResNet1D (see [1]). \n\n### References\n[1] Ismail Fawaz, Hassan, et al. \"Deep learning for time series classification: a review.\" Data mining and knowledge discovery 33.4 (2019): 917-963.\n\nI do not have questions per se, but I am listing some things that I am curious about below:\n\nI would also encourage the authors to evaluate the benefits of S3 on some recent time series foundation models such as MOMENT [2], Chronos [3], Moirai [4], TimesFM [5], and/or LagLLama [6]. The MOMENT model does both classification and forecasting, so it might be interesting to see how S3 benefits pre-trained models, say by just training the S3 layer and freezing the pre-trained backbone (or some variation of this experiment).\n\nOn a similar note, I wonder if S3 improves generalization and hurts memorization, or vice versa. It would be interesting to do some transfer learning experiments where you train on some time series data and evaluate the model on other time series data (see MOMENT or PatchTST for inspiration). \n\n### References\n[2] Goswami, Mononito, et al. \"Moment: A family of open time-series foundation models.\" arXiv preprint arXiv:2402.03885 (2024).\n[3] Ansari, Abdul Fatir, et al. \"Chronos: Learning the language of time series.\" arXiv preprint arXiv:2403.07815 (2024).\n[4] Woo, Gerald, et al. \"Unified training of universal time series forecasting transformers.\" arXiv preprint arXiv:2402.02592 (2024).\n[5] Das, Abhimanyu, et al. \"A decoder-only foundation model for time-series forecasting.\" arXiv preprint arXiv:2310.10688 (2023).\n[6] Rasul, Kashif, et al. \"Lag-llama: Towards foundation models for time series forecasting.\" arXiv preprint arXiv:2310.08278 (2023)." }, { "confidence": 4, "rating": 6, "review_id": "RhsdsSBMVs", "review_text": "The paper paper introduces a new approach called Segment, Shuffle, and Stitch (S3) to enhance time-series representation learning. The method involves segmenting the time-series into non-overlapping parts, shuffling them optimally, and stitching them back together along with the original sequence.\n\nKey contributions include:\n\n- Proposing the S3 mechanism to improve time-series representation learning by dynamically reordering segments.\n- Demonstrating that S3 can be integrated with existing neural architectures like CNNs and Transformers, resulting in significant performance improvements.\n- Showing through extensive experiments that S3 enhances performance in time-series classification and forecasting tasks, with improvements up to 68%.\n\n- Code is available, making reproducing this paper easier.\n- Paper is clear.\n- Results appear good, when considered on the set of baselines and dataset picked by the authors.\n\n- Tables 1 and 2 focus on the ETT datasets, which are only a (highly intra-correlated) subset of the common forecasting datasets: Electricity, Traffic, Weather, Illness...\n- I see no mention of CoST in the results tables, despite being cited in the paper. This is usually a very strong baseline for contrastive approaches. Including it would certainly paint a more complete picture of the results landscape. On a related note this also applies to e.g. more recent transformer baselines. Informer is relevant, but also very far from state of the art.\n- Error bars would help one better contextualize the results.\n- The lack of an ablation study makes understanding the reason this works more complicated.\n\n- The 3 points in weaknesses are also questions in the sense that they ask for some new experiments to be performed. Addressing those points would be my first recommendation.\n- Intuitively, it feels like this work is to some extent a form of bootstrap (as data augmentation) combined with a mixup-like sample interpolation. I may be wrong on this and am happy to discuss. If so, could the authors do more of an ablation study connected to this. I.e. how does the approach outperform other (non-permutation)-based data augmentation strategies combined with the same summation operation?\n\nEdit: I have read the author's rebuttal. They have addressed questions I had and I am as a result raising my score to a 6." }, { "confidence": 4, "rating": 6, "review_id": "Lhb1G9uQox", "review_text": "The paper introduces a simple but effective differentiable module that performs pre-processing to input multivariate time-series before being fed into any differentiable model for arbitrary task. The pre-processing involves segmenting, shuffling the segments and stiching them together. The novelty include making this seemingly discrete operations into a differentiable module. This simple idea yields significant improvement in performance of different kinds of models over variety of datasets.\n\n1. The method is simple and easy to add to most deep learning models\n2. The technical details are well-motivated and explained\n3. The method also improves training efficiency and convergence time along with performance with very little increate in model complexity\n4. Experimental results across different tasks are strong\n\n1. Visualization and any qualitative study on the shuffling and segments generalted by S3 would greatly benefit the readers.\n2. How well does it optimize transformer based models, especially those that already do segmentation like PatchTST since the attention module captures the relations all pairs of segments already?\n3. Does the representations due to S3 generalize to multiple tasks at a time or do we need to retrain for each task?\n\nSee weaknesses" } ]
zlgfRk2CQa
Rethinking Deep Thinking: Stable Learning of Algorithms using Lipschitz Constraints
Iterative algorithms solve problems by taking steps until a solution is reached. Models in the form of Deep Thinking (DT) networks have been demonstrated to learn iterative algorithms in a way that can scale to different sized problems at inference time using recurrent computation and convolutions. However, they are often unstable during training, and have no guarantees of convergence/termination at the solution. This paper addresses the problem of instability by analyzing the growth in intermediate representations, allowing us to build models (referred to as Deep Thinking with Lipschitz Constraints (DT-L)) with many fewer parameters and providing more reliable solutions. Additionally our DT-L formulation provides guarantees of convergence of the learned iterative procedure to a unique solution at inference time. We demonstrate DT-L is capable of robustly learning algorithms which extrapolate to harder problems than in the training set. We benchmark on the traveling salesperson problem to evaluate the capabilities of the modified system in an NP-hard problem where DT fails to learn.
https://openreview.net/pdf/0735617b982a5aca1dad5a07d887a2347d77d249.pdf
[ { "confidence": 3, "rating": 6, "review_id": "MnvTMTLcWc", "review_text": "To solve the stability of Deep Thinking models, this paper proposes to constrain activation functions to be Lipshitz-1 functions. The original DT and DT-R models have training stability problem, basically because of scale explosion or vanishing. The authors revealed the stability problem, attribute the problem to Lipschitz constants, proposed ways to ensure Lipschitz smoothness, and show the effectiveness of their approach through a few examples used in the original DT paper, as well as include the traveling salesman problem.\n\n* This paper is clearly written and well motivated. \n* The storyline is very reasonable: identify problems => propose ways to solve the problem => show the approach actually works\n* This approach is mathematically grounded.\n* Experiments are thorough, by running many random seeds and report error bars.\n\n* The idea is quite straight-forward (may not be a bad thing, but make technical contributions smaller)\n* In the TSP problems, DT-L's results seem worse than NN Tours and BNN Tours. At least some explanation is warranted. \n* I'm not fully convinced by the significance of this paper. The examples shown in the paper are quite toy. Are there more examples you expect DT-L would work?\n* I'd appreciate more visualizations that can intuitively show the benefits of DT-L over DT/DT-R? Maybe some Figures like in the original DT paper. \n* The title is not very informative. Might be better to mention Lipschitz smoothness in the title.\n\n* In Line 141-142, I don't quite get this comment \"Although any Lipschitz constant less than 1 would guarantee convergence, the nature of the problem solving mechanism we seek to learn intuitively means that we do not want fast convergence.\" Why don't we want faster convergence.\n* In Figure 6 left, it looks like DT-L is worse than DT-R? Why is that? More stability leads to worse performance?\n* What about DT and DT-R for TSP?" }, { "confidence": 4, "rating": 8, "review_id": "JseNGVVPPN", "review_text": "This paper identifies and rectifies an issue with a particular type of iterative neural network called Deep Thinking Networks. The problem arises in exploding latent representations and unstable training routines. The authors of this work propose an update to the architecture where they add Lipschitz constraints to the model. They show three major benefits: (I) The models train more stably/predictably; (II) the inference-time behavior is better as the latent representations converge with iterations of the recurrent model; and (III) this new approach can learn how to solve NP-Hard problems where the old methods fail.\n\n1. This paper is original to my knowledge. I am aware of much of the work on Deep Thinking Networks and the issues raised and the solutions proposed in this work are novel.\n1. The quality of the work is high. For the most part the experiments are done well and cover many natural questions that would arise from reading the abstract/intro.\n1. The clarity is good. I think the writing is clear and the results are compelling.\n1. The results are significant for those interested in easy-to-hard generalization. These Deep Thinking Networks have strong extrapolation of toy problems and with the proposed updates to the methods they show strong performance even for TSP solving.\n\n1. Clarity: A couple things could be more clear. \n i. I think IPT stands for Incremental Progress Training, but I don't see the acronym defined anywhere. \n ii. Table 1 the units are unclear. I gather there are tour lengths, but that isn't stated in the table or the caption. \n iii. The violin plot in Figure 2 is hard to parse (no harder than any other violin plot). This type of graphic does look nice, but offers little quantitative context. For example, there is no indication of the units/scale of the width of each violin. This is not the right type of plot for a conference paper.\n\n1. Can the authors make the clarifications needed to address my first two points in the Weaknesses section?\n1. Have the authors looked at transformer architectures at all? I'm not asking for results to be added to the paper, but I'm curious about how these techniques, which are independent from the parameterization of any given layer in some ways, might apply to modern large model architectures." }, { "confidence": 4, "rating": 6, "review_id": "EUnbVSCPjG", "review_text": "The paper addresses the positive feedback issue in the so called Deep Thinking networks, where the inference computation may involve more recurrent computations than encountered in training. The proposed solution is to normalise the state vector that undergoes the recurrence, i.e. make the mapping contractive, i.e. ensure negative (but just) feedback.\n\nThe paper is well written and clear to follow, the proposed method is pretty straight forward and effective.\n\nAs far as I can tell, it is pretty straight forward control theory stuff for addressing positive feedback. Nothing wrong with the proposed solution, but I would assume this is such a fundamentally well known issue in any recurrent/feedback system that we can leave this to be addressed by the designer at implementation time with any choice of normalisation. It is somewhat disappointing that with the proposed method there is still the need for batch normalisation.\n\nDoes batch normalisation alone not do a good job of stabilising the feedback?" }, { "confidence": 4, "rating": 6, "review_id": "gpRnDDIEof", "review_text": "The paper introduces Deep Thinking with Lipschitz Constraints (DT-L), an improved version of the Deep Thinking (DT) networks, designed to enhance the stability and performance of iterative algorithm learning models. The authors address the instability issues inherent in DT networks by analyzing intermediate representation growth and applying Lipschitz constraints. The DT-L model guarantees convergence to a unique solution and demonstrates robustness in learning algorithms that extrapolate to more complex problems. The paper furthermore benchmarks DT-L on the Traveling Salesperson Problem (TSP) as well other than the datasets used in the Deep Thinking models. It compares its performance against existing DT models.\n\n- Introducing Lipschitz constraints into the DT framework enhances the models' reasoning capabilities. This approach addresses instability issues in training and inference, offering theoretical guarantees for convergence.\n- DT-L demonstrates the ability to scale to larger problems effectively, maintaining stability and performance, which is crucial for real-world applications.\n- The comprehensive evaluation on various problem classes, including prefix sums, mazes, chess puzzles, and TSP, highlights the robustness and versatility of the DT-L model.\n- The paper provides a thorough analysis of the issues with DT networks and clearly explains how the proposed modifications address these problems.\n\n- The modifications and theoretical underpinnings of the DT-L model, such as the Lipschitz constraints and orthogonal transformations, add complexity to the model, which might hinder its adoption and understanding by a broader audience.\n- While the DT-L model shows improvement, its performance on the TSP is not impressive, indicating room for further optimization and refinement.\n\n- How does the introduction of Lipschitz constraints impact the computational complexity and training time of the DT-L model compared to traditional DT models?\n- Can the proposed DT-L model be extended to other types of iterative algorithms beyond the ones tested in this paper? If so, what modifications would be necessary?\n- Can this applied for transformer architectures like looped transformers?\n- Can the insights gained from this work be applied to improve the interpretability of the learned algorithms, making the decision-making process of the DT-L model more transparent?" } ]
zkhyrxlwqH
Unsupervised Homography Estimation on Multimodal Image Pair via Alternating Optimization
Estimating the homography between two images is crucial for mid- or high-level vision tasks, such as image stitching and fusion. However, using supervised learning methods is often challenging or costly due to the difficulty of collecting ground-truth data. In response, unsupervised learning approaches have emerged. Most early methods, though, assume that the given image pairs are from the same camera or have minor lighting differences. Consequently, while these methods perform effectively under such conditions, they generally fail when input image pairs come from different domains, referred to as multimodal image pairs. To address these limitations, we propose AltO, an unsupervised learning framework for estimating homography in multimodal image pairs. Our method employs a two-phase alternating optimization framework, similar to Expectation-Maximization (EM), where one phase reduces the geometry gap and the other addresses the modality gap. To handle these gaps, we use Barlow Twins loss for the modality gap and propose an extended version, Geometry Barlow Twins, for the geometry gap. As a result, we demonstrate that our method, AltO, can be trained on multimodal datasets without any ground-truth data. It not only outperforms other unsupervised methods but is also compatible with various architectures of homography estimators. The source code can be found at: https://github.com/songsang7/AltO
https://openreview.net/pdf/dbd7c26b2dae2f1c86abaa70a60fb6e9e683d675.pdf
[ { "confidence": 5, "rating": 3, "review_id": "hXC6dl8P6M", "review_text": "The paper proposes an unsupervised homography estimation method for multimodal image pairs using an alternating optimization approach. The claimed key innovation is the introduction of the Geometry Barlow Twins loss function for the alternating optimization. The authors show that their approach works on 3 multimodal datasets and different homography estimation architecutres.\n\nThe alternating optimization framework together with Geometry Barlow Twins loss seem to be a fresh perspective on unsupervised multimodal homography estimation.\n\nWeaknesses\n1. Discussion on the Feasibility and Rationality of the Proposed Method: First, for unsupervised training of networks based on iterative prediction, such as RAFT, to ensure stability during training, related methods [1-2] typically apply some form of direct supervision to the motion predicted by the network. This is different from the approach proposed in this paper, which only uses the Geometry Barlow Twins loss for brightness supervision. Second, how RAFT can be used for homography estimation should also be explained, because it is designed for optical flow estimation. Moreover, the paper does not explain how the proposed Geometry Barlow Twins loss supervises the intermediate stages of iterative prediction, whereas RAFT, IHN, and RHWF, along with methods leveraging their structures [1-2], generally provide details on their supervision mechanisms on the intermediate stages. This raises concerns about the feasibility of the proposed supervision method in this paper. Additionally, the effectiveness of the Modality-Agnostic Representation Learning (MARL) introduced in section 4.3 is questionable because it lacks spatial information in its supervision. As mentioned in section 3.2, the projector removes spatial information from the feature maps. The authors should provide a convincing and thorough explanation for these issues. \n\n2. Doubt about the Effectiveness of the Proposed Method: For example, the paper proposes the alternating optimization (AltO) method but does not provide sufficient experimental results to demonstrate its superiority over other strategies, such as directly cascading all the modules. Furthermore, the paper lacks a comparative demonstration of the features extracted with and without the MARL phase, making the advantages of introducing this phase less convincing.\n\n3. Insufficient Experimental Validation: The paper conducts experiments on only 3 cross-modal datasets, among which only the GoogleMap dataset exhibits significant modality differences. The GoogleEarth dataset mainly consists of images taken in different seasons [3]. Part of the DeepIR dataset is simulated multispectral data [4], which will significantly reduce the difficulty of homography estimation. It would be beneficial to conduct experiments on more challenging multimodal datasets, such as those involving VIS-SAR modalities.\n\n[1] Stone, A., Maurer, D., Ayvaci, A., Angelova, A., & Jonschkowski, R. (2021). Smurf: Self-teaching multi-frame unsupervised raft with full-image warping. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition (pp. 3887-3896).\n[2] Liang, Y., Liu, J., Zhang, D., & Fu, Y. (2023). Mpi-flow: Learning realistic optical flow with multiplane images. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 13857-13868).\n[3] Zhao, Y., Huang, X., & Zhang, Z. (2021). Deep lucas-kanade homography for multimodal image alignment. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 15950-15959).\n[4] Sa, I., Lim, J. Y., Ahn, H. S., & MacDonald, B. (2022). deepNIR: Datasets for generating synthetic NIR images and improved fruit detection system using deep learning techniques. Sensors, 22(13), 4721.\n\nPlease refer to the Weaknesses." }, { "confidence": 4, "rating": 7, "review_id": "jsmFxFHfsr", "review_text": "This paper proposes a new unsupervised homography estimation approach for multimodal images. This method is designed as a two-phase optimization framework named AltO. The first phase named \"Geometry Learning\" trains a registration network to align the input multimodal images geometrically. The second phase named \"Modality-Agnostic Representation Learning\" trains an encoder and a projector to extract the image-level features invariant to modality changes. Experimental results demonstrate that AltO outperforms several existing unsupervised approaches on the multimodal registration datasets.\n\n1. The proposed framework is intuitive and interesting. This framework trains a registration network to align the input multimodal images geometrically, and trains another encoder to match the image-level features of the warped multimodal images. This framework has the potential to capture the pixel-level and image-level information in an unsupervised manner.\n2. The organization and presentation of this paper are good. I think I can understand the core idea of this paper.\n\n**1. Some central claims of this paper lack experimental evidence.**\n\n1.1 The \"alternating\" optimization framework is a central design in this paper. However, why is \"alternating\" optimization necessary? Will optimizing the \"geometry loss\" and \"modality loss\" simultaneously hurt performance?\n\n1.2 The superiority of the proposed Geometry Barlow Twins (GBT) loss was not verified. The original Barlow Twins loss can be straightforwardly applied to the proposed model by considering both the spatial axis (indexed with \"h,w\") and batch axis (indexed with \"n\") as the batch dimension. This straightforward implementation should be compared with the proposed GBT loss.\n\n1.3 The proposed approaches should be compared with some recent unsupervised approaches. Here are some approaches with released codes.\n\n[1] Unsupervised global and local homography estimation with motion basis learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.\n\n[2] A Multiscale Framework with Unsupervised Learning for Remote Sensing Image Registration, IEEE Transactions on Geoscience and Remote Sensing, 2022.\n\n**2. This paper did not discuss the recent hand-crafted approaches for multimodal image registration.**\n\nMany recent hand-crafted methods have been published in the top journals, so this kind of approach should not be ignored. The experiment should also compare the proposed approaches with the recent hand-crafted approaches. Here are some hand-crafted approaches with released code.\n\n[3] Histogram of the orientation of the weighted phase descriptor for multi-modal remote sensing image matching. ISPRS Journal of Photogrammetry and Remote Sensing, 2023.\n\n[4] POS-GIFT: A geometric and intensity-invariant feature transformation for multimodal images. Information Fusion, 2024.\n\n**3. The discussion of the motivation is not sufficient.**\n\nThe Introduction section mentioned some typical unsupervised approaches designed for the images from the same modality (e.g., UDHN and biHomE). However, the unsupervised approaches [2,5] designed for multimodal image registration are not discussed. What is the motivation of the proposed method compared with this kind of approach? \n\n[5] A Novel Coarse-to-Fine Deep Learning Registration Framework for Multi-Modal Remote Sensing Images. IEEE Transactions on Geoscience and Remote Sensing, 2023.\n\n**4. This paper misses some references to hand-crafted and unsupervised approaches.**\n\nI have listed some of them in the above weaknesses. The authors should further survey more papers and carefully revise the \"Related Work\" section.\n\nPlease provide more discussions and experimental results to address the above weaknesses.\n\nMoreover, is the 3D reconstruction task related to \"Homography Estimation\" (line 21)? Generally, 3D reconstruction focuses on non-planar scenes, while homography estimation is designed for the planar scenes. Is there some literature that mentions the relationship between 3D reconstruction and homography estimation?" }, { "confidence": 5, "rating": 6, "review_id": "W69deQd2wr", "review_text": "The paper addresses unsupervised homography estimation from multi-modal image pairs. The authors propose to cope with the issue of 1) modality, 2) registration in two distinct networks that are trained in an interleaved fashion. The networks architecture derives from the Barlow Twins framework, with changes in the loss function. Results are illustrated on several public benchmark of small images (128^2) and compares favorably wrt to related unsupervised approach.\n\n1- I enjoy reading the paper. I walked through the paper, first with curiosity and skepticism, then with strong interest. The approach is intuitive (adjust the two representations then compute the transformation) and compelling. I am somehow surprised that it works :) The constrastive-like loss used in Barlow Twins is contributing much for the network to learn the correct solution. \n\n2- Overall, the authors are tackling an important problem (unsupervised learning) for which an original solution is proposed --while based on previous recent work. The methodology is clearly presented. Results are convincing (thought only on small images 128x128) and illustrated on various modality pairs. Quantitative results show improvement wrt related unsupervised work\n\n1- Not a weakness, but a points which could have been discussed: why not simply transforming the inputs into edge maps before learning a matching/homography function (and putting aside the modality discrepancy). It would not be a very fancy approach, but I believe it could be a baseline for comparison. \n\n2- The approach would be more convincing if each of the two modules (GL and MARL) had demonstrated their effectiveness also individually (ie same image pair modality using only GL).\n\n- What is the size of the embedding? What is the training time? Are the Barlow Twins trained from scratch?\n\n- Illustration seems to show strong geometric features (ie lines) in the input images. Is it a strong limitation of the approach?" } ]
zkfCa4oESF
TPR: Topology-Preserving Reservoirs for Generalized Zero-Shot Learning
Pre-trained vision-language models (VLMs) such as CLIP have shown excellent performance for zero-shot classification. Based on CLIP, recent methods design various learnable prompts to evaluate the zero-shot generalization capability on a base-to-novel setting. This setting assumes test samples are already divided into either base or novel classes, limiting its application to realistic scenarios. In this paper, we focus on a more challenging and practical setting: generalized zero-shot learning (GZSL), i.e., testing with no information about the base/novel division. To address this challenging zero-shot problem, we introduce two unique designs that enable us to classify an image without the need of knowing whether it comes from seen or unseen classes. Firstly, most existing methods only adopt a single latent space to align visual and linguistic features, which has a limited ability to represent complex visual-linguistic patterns, especially for fine-grained tasks. Instead, we propose a dual-space feature alignment module that effectively augments the latent space with a novel attribute space induced by a well-devised attribute reservoir. In particular, the attribute reservoir consists of a static vocabulary and learnable tokens complementing each other for flexible control over feature granularity. Secondly, finetuning CLIP models (e.g., prompt learning) on seen base classes usually sacrifices the model's original generalization capability on unseen novel classes. To mitigate this issue, we present a new topology-preserving objective that can enforce feature topology structures of the combined base and novel classes to resemble the topology of CLIP. In this manner, our model will inherit the generalization ability of CLIP through maintaining the pairwise class angles in the attribute space. Extensive experiments on twelve object recognition datasets demonstrate that our model, termed Topology-Preserving Reservoir (TPR), outperforms strong baselines including both prompt learning and conventional generative-based zero-shot methods.
https://openreview.net/pdf/e9ab97ad78449ecd4bb7169860020d90e331f252.pdf
[ { "confidence": 3, "rating": 6, "review_id": "yJGpWFdUIy", "review_text": "This paper proposes a new task, \"generalized zero-shot learning (GZSL),\" in which both seen and unseen objects should be recognized for vision-language tasks. It also proposes a new method based on CLIP that uses the loss in the \"attribute space\" to perform better in both seen and unseen classes. This method is evaluated on various kinds of data sets and evaluated by the harmonic mean of the accuracies of seen and unseen classes.\n\nThe proposed approach using the attribute space seems novel enough, and its effectiveness was verified by a detailed comparison of the other methods and the well-designed ablation studies.\n\n1) It is unclear what is learned in \"learnable attribute tokens.\" It is not so beneficial for unseen classes. It is unclear what information is represented as tokens for seen classes. It may be better to analyze the acquired tokens in more detail.\n\n2) It is difficult to think about the case that we have never seen an object, but we know its attributes quite well. In such a sense, \nI believe this method is more appropriate for few-shot learning.\n\nStrangely, the accuracy increases as the number of learnable tokes increases in Fig 4. (a) AwA2. It would be appreciated if you could provide any insights into this phenomenon." }, { "confidence": 5, "rating": 4, "review_id": "9qxaQicbeD", "review_text": "In this paper the author proposed a dual-space feature alignment module to keep the semantic consistency between visual and attribute. In addition, the authors proposed Topology-Preserving Reservoir (TPR) to tackle the issue into the generalized zero shot learning (GZSL) setting, which utilized the Pearson correlation coefficient to define a topology-preserving loss, which effectively prevents overfitting of the seen and unseen classes. Sufficient experiment demonstrate the effectiveness of the proposed method.\n\n(1)The Paper is well-written, meanwhile, the method is intuitive and easy to understand.\n(2)The proposed method focused on Generalized Zero-Shot Learning (GZSL) to present Topology-Preserving Reservoir to finetune the pre-trained CLIP for better fit the distribution of seen and unseen classes, which seems reasonable.\n(3)Sufficient and significant experiments demonstrate the effectiveness of the proposed method.\n\n(1)The Dual-Space Feature Alignment proposed by the author, which uses a Cross Attention mechanism for cross-modal alignment, lacks innovation.\n(2)The author mentions \"attribute reservoir\" in the article, but essentially it is just a fully connected layer that generates different feature representations through various loss constraints. Additionally, in Figure 2, the attribute reservoir is shown in two states: frozen and trained. I am unsure about when these two states should transition between each other.\n(3)The idea proposed by the author to fine-tune feature distribution using spatial topological structures is intriguing. However, relying solely on the Pearson correlation coefficient to define a topology-preserving loss seems somewhat simplistic.\n\nNA" }, { "confidence": 5, "rating": 6, "review_id": "rpl2xOty9T", "review_text": "The proposed approach targets the generalized zero-shot learning (GZSL) problem for the vision language model (VLM). It is observed that a strong VLM model shows promising results for novel class generalization. Fine-tuning these models for seen classes leads to a loss in generalization capability and poor results for unseen classes. Additionally, a single latent space demonstrates limited ability to adapt to complex visual-linguistic patterns in fine-grained datasets. The paper proposes dual-space alignment, augmenting the latent space with static and learnable tokens. To address the generalization problem post fine-tuning, the paper introduces a Topology-Preserving Reservoir (TPR), which helps preserve the model's generalization ability for unseen classes. The authors conducted extensive experiments across several standard ZSL datasets and explored the impact of various components through ablation studies.\n\n[1] Generalization of unseen classes in VLM is a critical problem. The strong pretrained model also loses its generalization ability, which the author explores, and the proposed model shows a significant impact.\n\n[2] The idea and intuition behind the static and learnable attribute reservoir are interesting. Additionally, TPR helps improve generalization.\n\n[3] The wide-ranging experiments conducted across various ZSL datasets and the ablation studies are satisfactory.\n\n[1] The standard ZSL model assumes that there is a description per class rather than per sample, which is more intuitive since a single description for each class suffices for the model to understand the class, making it cost-efficient. Standard annotation-based attributes often yield better results for ZSL/GZSL settings. For example, [a] demonstrates impressive results for the CUB dataset compared to the proposed complex static, learnable, and description-based model. This issue is particularly observed in fine-grained datasets. Why is this the case?\n\n[2] It is unclear how the base attribute vocabulary is created. At a high level, the author collected a few attributes and obtained LLM embeddings. This description may not be sufficient for reproducibility since the code and data are not provided.\n\n[3] There are multiple variants of TPR (Table-3) in various scenarios where different methods work, making it difficult to apply and choose the best one. What does the author conclude here?\n\n[4] In Table-1 for the SUN dataset, the model shows inferior performance. While we do not expect the model to outperform in all scenarios, a clear description and author observations are required: why is this the case?\n\n[a] Meta-Learned Attribute Self-Interaction Network for Continual and Generalized Zero-Shot Learning, WACV-24\n\nPlease refer to the weakness section." }, { "confidence": 5, "rating": 7, "review_id": "viVsE6h4wt", "review_text": "This paper is a new study that introduces the Generalized Zero-Shot Learning (GZSL) framework within VLMs, aiming to classify both known and novel classes without class partitioning. Key innovations include a dual-space feature alignment module, enhancing latent representations with an attribute reservoir for nuanced visual-linguistic patterns. Additionally, a topology-preserving objective ensures that model adaptations preserve the semantic structure learned by CLIP, thus maintaining generalization across all classes. Extensive experiments across diverse datasets validate the proposed Topology-Preserving Reservoir (TPR) model, demonstrating superior performance over conventional methods in recognizing both seen and unseen classes, underlining its potential for practical applications in complex visual recognition tasks.\n\n1. This paper introcuces a novel research aspect for VLMs: generalized zero-shot learning, which requires the model to identify both seen and unseen concepts at the same time. From my perspective, this proposal could be a great contribution to VLM community.\n2. This paper is well-organized and well-written, which makes it easy to follow. \n3. Extensive experiments,ablation study and visualization results demonstrate the effectiveness and rationality of TPR.\n\nNone in particular.\n\nGZSL is a long standing problem in ML/AI community with many classic solutions, like stacking[1] .etc. Since the authors only compare with VLM-based methods, my concern is about the perfromance of classic methods in VLM-based GZSL tasks, can they achieve surprising performance with well-trained features?\n[1] Chao W L, Changpinyo S, Gong B, et al. An empirical study and analysis of generalized zero-shot learning for object recognition in the wild[C]//Computer Vision–ECCV 2016" } ]
ziehA15y8k
Enhancing Robustness of Graph Neural Networks on Social Media with Explainable Inverse Reinforcement Learning
Adversarial attacks against graph neural networks (GNNs) through perturbations of the graph structure are increasingly common in social network tasks like rumor detection. Social media platforms capture diverse attack sequence samples through both machine and manual screening processes. Investigating effective ways to leverage these adversarial samples to enhance robustness is imperative. We improve the maximum entropy inverse reinforcement learning (IRL) method with the mixture-of-experts approach to address multi-source graph adversarial attacks. This method reconstructs the attack policy, integrating various attack models and providing feature-level explanations, subsequently generating additional adversarial samples to fortify the robustness of detection models. We develop precise sample guidance and a bidirectional update mechanism to reduce the deviation caused by imprecise feature representation and negative sampling within the large action space of social graphs, while also accelerating policy learning. We take rumor detector as an example targeted GNN model on real-world rumor datasets. By utilizing a small subset of samples generated by various graph adversarial attack methods, we reconstruct the attack policy, closely approximating the performance of the original attack method. We validate that samples generated by the learned policy enhance model robustness through adversarial training and data augmentation.
https://openreview.net/pdf/8a54fe72827cc6095b89937198d8112de7d53f3d.pdf
[ { "confidence": 3, "rating": 7, "review_id": "WqsetJBG99", "review_text": "The paper presents a novel approach to enhancing the robustness of Graph Neural Networks (GNNs) against adversarial attacks, specifically in social media contexts such as rumor detection. The authors propose an enhanced maximum entropy inverse reinforcement learning (IRL) method with a mixture-of-experts approach to tackle multi-source graph adversarial attacks. This method aims to reconstruct attack policies, integrate various attack models, and generate additional adversarial samples to improve the robustness of GNN-based detection models.\n\nThe application of inverse reinforcement learning to reconstruct adversarial attack policies is novel and offers a highly interesting perspective on enhancing GNN robustness.\n\nCombined with the Mixture-of-Experts, the method allows for the integration of various attack models, providing comprehensive feature-level explanations and robust adversarial samples for use in adversarial training.\n\nThe generation of good additional adversarial samples for training improves the GNN’s resilience to attacks, which is a significant step towards robust social media analysis. \n\nThe authors use real-world social media datasets to validate the proposed method.\n\nThe proposed method involves multiple components (IRL, mixture-of-experts, bidirectional updates), which can increase the computational complexity and may not be easily scalable.\n\nThe focus is primarily on rumor detection in social media, which, while important, might limit the generalizability of the method to other types of graphs and applications.\n\nSome sections, particularly those involving the theoretical underpinnings of IRL and mixture-of-experts, could be more clearly explained to enhance understanding and accessibility.\n\nNo code is provided. This hinders the exact reproduction of results.\n\nI think the authors use the term \"Threat model\" in an incorrect or at least unorthodox way that will likely be misunderstood in the security community and potentially beyond. Specifically, in line 229, the authors start a paragraph called “Threat model” and they proceed to describe that they use, GCN, number of hidden dimensions, optimizer, etc. This is not what is typically understood as a threat model in literature: a model of a threat actor's capabilities, possible courses of action they may take and how it will impact the operation of a computer system [1]. Speaking of it, including an actual threat model (or rather making the implicitly exiting model explicit) would certainly strengthen the paper and increase acceptance in the security community.\n\nMinor issues:\nTypos/grammar in lines 13, 69, 109, 111\n\n[1] https://www.sciencedirect.com/science/article/abs/pii/S0167404818307478\n\nCould this approach be transferred (with respective modifications) to bit flips, e.g., [1]?\n\nDid you analyze the impact of varying Rich-club coefficients in the datasets?\n\n[1] https://arxiv.org/abs/2311.01205" }, { "confidence": 4, "rating": 8, "review_id": "2JyhNgu2vb", "review_text": "This paper addresses the challenge of adversarial attacks on Graph Neural Networks (GNNs) employed in social media tasks, such as rumor detection. The authors introduce MoE-BiEntIRL, a method that leverages a mixture-of-experts approach combined with inverse reinforcement learning (IRL) to reconstruct and explain adversarial attack policies. The objective of this method is to enhance the robustness of GNNs by generating additional adversarial samples for training, thereby improving resilience against attacks. MoE-BiEnt\\IRL incorporates mechanisms for precise sample guidance and bidirectional updates, which are designed to optimize both the accuracy and the speed of policy learning.\n\n1. Innovative Approach: The introduction of MoE-BiEntIRL represents a significant innovation, particularly through its application of a mixture-of-experts approach to manage diverse adversaries and provide detailed feature-level explanations. \n2. Real-world Validation: The method is validated on actual datasets from Weibo and Pheme, demonstrating its practical applicability for improving the robustness of GNNs in social media rumor detection scenarios. \n3. Experimental results, focusing on policy reconstruction and adversarial training, effectively illustrate the method’s robustness and efficacy.\n4. The approach facilitates a deeper understanding of attack behaviors through feature-level explanations, aiding platform operators in enhancing system defenses.\n\n1. The proposed method involves multiple stages and sophisticated mechanisms, potentially complicating its implementation. \n2. Scalability Discussion: The paper would benefit from a more extensive discussion on the scalability of the method, particularly concerning its applicability to large social media graphs. \n3. Experimental Setup Details: Enhancing the description of the experimental setup would significantly improve the reproducibility of the study and aid other researchers in replicating the results.\n4. Typos and grammar errors could be avoided.\n\n1.\tScalability Analysis Could you elaborate on how your method scales when applied to very large social media graphs? Any additional insights or preliminary results on this matter would be highly informative. \n2.\tCould you provide more detailed information regarding your experimental setup? Additional details would aid in understanding how to replicate your study effectively." }, { "confidence": 5, "rating": 5, "review_id": "azp6illflz", "review_text": "This work studies the problem of reconstructing attack policies using collected adversarial samples to enhance the robustness of GNN-based models in social network tasks, specifically rumor detection. The authors propose the MoE-BiEntIRL framework, which employs a mixture-of-experts approach to learn optimal policies from diverse adversaries, and provides feature-level explanations by estimating interpretable linear reward functions. Experiments on two real-world rumor detection datasets validate the effectiveness of MoE-BiEntIRL.\n\n1. The authors investigate the rumor detection problem from the novel perspective of reconstructing attack policies.\n\n2. The paper is well-written and well-organized, with motivating illustrations of the problem.\n\nWhile the proposed problem and approach are generally novel and intriguing, the following issues regarding experiments require further clarification:\n\n1. **Table 2:** What makes the policies on Pheme significantly harder to recover than the policies on Weibo?\n\n2. **Table 3:** The results are not clearly illustrated and explained.\n - For instance, it appears that the column under \"w/o Att.\" reflects test accuracy (%), while results under other columns reflect accuracy decline in actual numbers. Please align the representations for consistency.\n - If \"w/o Att.\" refers to GCN's rumor detection performance without adversarial attacks, it is surprising to see that GCN only achieves ~70% test accuracy on the Weibo dataset with binary rumor / non-rumor labels. The authors claim that the Weibo dataset is adopted from existing work [1], which reported over 80% test accuracy on Weibo even using simple models such as TF-IDF or GRU. Please elaborate on the causes for this significant performance discrepancy, e.g, data differences and model structure differences.\n\n3. **Computational Efficiency:** Given the complexity of the model structure illustrated in Figure 2, it would be beneficial to benchmark the computational efficiency of the proposed approach against the baselines in Table 3.\n\n[1] Changhe Song, Cheng Yang, Huimin Chen, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. Ced: Credible early detection of social media rumors. TKDE, 33(8):3035–3047, 2021.\n\nPlease refer to the weaknesses section." }, { "confidence": 4, "rating": 8, "review_id": "vMmJnD8pdl", "review_text": "The paper presents a novel method, MoE-BiEntIRL, which combines a mixture-of-experts approach with inverse reinforcement learning to enhance the robustness and explainability of adversarial attacks on GNNs. The method addresses the critical issue of stabilizing GNNs used in social media for rumor detection, demonstrating significant practical relevance. Strengths include its innovative approach, comprehensive mechanisms for improving attack policy accuracy, and robust evaluation results. However, the paper could benefit from clearer explanations of the method, detailed parameter sensitivity analysis, enhanced experimental reproducibility, expanded comparative baselines. Despite these minor weaknesses, the overall contribution and practical importance of the research are compelling.\n\n1. The MoE-BiEntIRL method presents a highly novel application of a mixture-of-experts approach combined with inverse reinforcement learning to address adversarial attacks on Graph Neural Networks (GNNs). This innovative approach stands out in its ability to not only enhance the robustness of GNNs but also to provide explainability to the attack policies.\n2. The inclusion of precise sample guidance mechanisms and a bidirectional update mechanism demonstrates thoroughness in approach, aiming to improve both the accuracy of attack policy reconstruction and the speed of policy learning. This comprehensive approach adds substantial value to the proposed solution.\n3. The evaluation methods employed in this study are robust, validating the effectiveness of the proposed method. The results are compelling, showing notable improvements in the robustness of GNNs.\n\n1. Although the proposed method is innovative, some aspects of the algorithm could benefit from clearer explanations. \n2. A minor issue is that the sensitivity of the model to various parameters is not thoroughly explored. A brief analysis or guidance on parameter selection could aid in the practical application of the method.\n3. While the method is novel, there is little discussion on its computational complexity. Including an analysis of the computational cost and suggesting optimizations could enhance the practical feasibility of the approach.\n\n1. Could you provide a simple illustrative example or additional details to clarify how the precise sample guidance mechanism and the bidirectional update mechanism work in the MoE-BiEntIRL method?\n2. Can you provide a brief overview of the computational complexity of your proposed method, along with any potential optimizations that could be considered to enhance practical feasibility?" } ]
ziYC4FHRNr
Entrywise error bounds for low-rank approximations of kernel matrices
In this paper, we derive *entrywise* error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition (or singular value decomposition). While this approximation is well-known to be optimal with respect to the spectral and Frobenius norm error, little is known about the statistical behaviour of individual entries. Our error bounds fill this gap. A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues, which takes inspiration from the field of Random Matrix Theory. Finally, we validate our theory with an empirical study of a collection of synthetic and real-world datasets.
https://openreview.net/pdf/b7d1213d226d72a537e0292965c1e860cf9010e7.pdf
[ { "confidence": 3, "rating": 6, "review_id": "M8RohoUDnU", "review_text": "This paper is first to establish entrywise guarantees for low rank approximation of kernel matrices when kernel eigenvalues satisfy either polynomial or exponential decay. More specifically, in the $\\alpha$-polynomial decay setting, entrywise error scales as $O(n^{-\\\\frac{\\alpha-1}{\\\\alpha}} \\\\log n)$ for rank $d = \\Omega(n^{1/\\\\alpha})$, while for $(\\\\beta,\\\\gamma)$-exponential decay error scales like $O(1/n)$ for $d > \\\\log^{1/\\\\gamma}(n^{1/\\\\beta})$. In order to establish such results, authors prove that eigenvectors corresponding to small eigenvalues are completely incoherent/delocalized i.e. have bounded entries of size $O(1/\\\\sqrt{n})$. Technical novelty stems from the fact that entries of the kernel matrix are dependent and have non-zero mean.\n\n1) This is a first result showing entrywise error guarantees for low rank approximation of kernel matrices. \n\n2) Proof sketches of two main theorems are clear and easy to follow.\n\n3) Strongest technical contribution of this paper is proof given in Appendix D that, simply speaking, shows that the norm of projection of vector 1 on the subspace spanned by $n-d'$ eigenvectors with smallest eigenvalues is vanishing sufficiently fast.\n\n4) Experiments are complementing theoretical results well.\n\n1) Although authors claim that Lemma 1 is a novel concentration result, it seems to be only a slight generalization of Lemma 68 in Tao and Vu [2011], and is proved essentially using the same argument as that in the proof of Lemma 68. \n\n2) Although I appreciate proof sketches of Theorems 1 and 2 in the main text, I believe it would be more useful to add more information about the proof deferred to Appendix D since this is the most novel and interesting part of the proof.\n\n3) It is not clear whether assumption (R) is necessary and how general it is apart from the two special cases given in Section 3.1.\n\n1) Could you elaborate more on tightness of your results? How do they compare with already established results for Frobenius and spectral norm? Are there any known lower bounds for entrywise estimation?\n\n2) Although assumptions (E) and (P) seem to be very natural, I am not sure about assumption (R). Do results hold for any $a$ and $b$ such that $1\\\\leq a < b/16$? Since the final error bound does not depend on $a$ and $b$, do you think this assumption can be relaxed? \n\n3) Although I think that double descent observation is interesting on its own, the evidence for it is vague. Is this behavior observed for a range of percentile values or does it happen only around 99.95 percentile? Also from figures in the paper seem like it appears only for not very smooth kernel functions. It would be beneficial to have more convincing evidence whether this phenomenon occurs because of your choice of 1) kernels, 2) percentiles, 3) entrywise errors or something else.\n\n\nTypos and other comments\n\n(106) maximum entrywise \"error\" (missing)\n\n(186) should be $ \\\\hat{u}_i(1)$, instead of $ \\\\hat{u}_l(1) $\n\n(647) later on\n\nI would prefer if you do not use $(a,b)$ both for constants in assumption (R) and for vectors in the proof of Theorem 2.\n\nIn introduction you cite [Lei, 2019] for establishing entrywise error bounds for reinforcement learning - but I could not find any references to RL in that paper. Is this a typo? For example, I believe the following papers are more suitable for that particular reference: \n\n- Pananjady, Ashwin, and Martin J. Wainwright. \"Instance-dependent ℓ∞-bounds for policy evaluation in tabular reinforcement learning.\" IEEE Transactions on Information Theory 67.1 (2020): 566-585.\n- Shah, Devavrat, et al. \"Sample efficient reinforcement learning via low-rank matrix estimation.\" Advances in Neural Information Processing Systems 33 (2020): 12092-12103.\n- Stojanovic, Stefan, Yassir Jedra, and Alexandre Proutiere. \"Spectral entry-wise matrix estimation for low-rank reinforcement learning.\" Advances in Neural Information Processing Systems 36 (2023): 77056-77070." }, { "confidence": 3, "rating": 5, "review_id": "0uHjWUaUsG", "review_text": "The paper focuses on deriving entrywise error bounds for low-rank approximations of kernel matrices using truncated eigen-decomposition. It addresses the statistical behavior of individual entries in such approximations under assumptions of polynomial eigenvalue decay or exponential decay. The authors also provide empirical studies on synthetic and real-world datasets.\n\n1. The paper is clear and well written. The proof seems to be solid.\n2. The entrywise error bound is new to the community. \n3. The assumptions on polynomial/exponential eigenvalue decay seem general and cover lots of common kernels.\n4. Some statements about random matrix theory and concentration inequalities are provided (e.g., Lemma 1), which could be independently useful to the community.\n\n1. The assumptions on the eigenfunctions corresponding to the assumptions of eigenvalue decay are hard to verify for general kernels, especially the part on the rate of decay ($\\alpha >2r+1,\\beta> 2s$). Moreover, I wonder if these inequalities are required to guanrantee the uniform convergence of the kernel (I note that $k(x,y)=\\sum_{i=1}^{\\infty}\\lambda_i u_i(x)u_i(y)$ converges uniformly under these assumptions). But in the proof I see these assumptions are used in a way like $\\beta-s\\ge \\beta/2$ (e.g., Line 590). Thus, I am not sure if these asssumptions are necessary for derivation.\n2. Assumption (R) seems not natural (why is $1\\le a < b/16$ is needed?) and also I do not know how to verify this. Could you provide some examples with $\\Gamma_i \\neq 0$ under Assumption (R)?\n3. The contributions are undetermined. The proof of the main theorem seems to heavily rely on past random matrix theory works (Tao and Vu [2011], Erdős et al. [2009 a,b]). With assumptions (E)/(P) and (R) and the previous works, the proof is straightfoward. And I am not sure about the importance of entrywise error bound.\n\nMinor typos: \n1. Line 578/588 hypotheses-> hypothesis\n2. Line 539/581 miss a period\n\n1. (Line 82) What do you mean by \"infinite sample limit of $\\frac{1}{n}K$\"?\n2. Could you provide more general examples that completely follow the assumption (E)/(P)?\n3. Is this error bound optimal? Are there any lower bound results?\n4. Is it possible (or are there any hardness results) to compute or approximate $\\text{argmin}_{K':\\text{rank}(K')=d}$ $ \\||K-K'\\||$ w.r.t. the sup norm?\n5. Regarding the importance of entrywise error bound, could you provide more concrete examples?" }, { "confidence": 4, "rating": 5, "review_id": "oQ1pMXZdQV", "review_text": "The authors consider the kernel matrices, formed by $n$ vectors i.i.d. drawn from a $p$-dimensional probability distribution $\\rho$. Under several assumptions on the associated kernel operator on $L^2_{\\rho}$, including the positive definiteness of the kernel and decay condition on the eigenvalues of the kernel, the authors prove an estimate on individual entries of the matrix kernel and those of the low-rank approximation of the kernel. Numerical experiments on the estimate error are done with both synthetic datasets and real-world datasets.\n\n- The problem is a very fundamental one and it is considered both analytically and numerically.\n- The writing is very clear and easy to read.\n\n- Lemma 1 is wrong, and thus the proofs of the main results do not work. \nConsider an extreme case where $a=0$ with probability $1$. Then, since $\\pi$ is an orthogonal projection, $\\| \\pi_H(a) \\| = 0$ and thus Lemma 1 fails. The main issue is that in the proof of Lemma 1, if $S_1 = \\sum p_{ii} (\\xi_i^2 - 1)$, then $E[S_1^2] = \\sum_{i, j} p_{ii} p_{jj} E[\\xi_i^2 - 1] E[\\xi_j^2 - 1]$, which is different from $\\sum_i p_{ii}^2 E[(\\xi_i^2 - 1)^2]$ in (17), unless $E[\\xi^2]=1$. As a result, (17) and the estimates on $P(E_+)$ and $P(E_-)$ fail.\n-> The proofs of the main results would work after modifying Lemma 1 as suggested by the authors.\n\n- Is it possible to prove Lemma 1 with additional assumptions that are suitable to the current setting?\n- In the proof of Lemma 1, there are other minor problems listed below.\n1) In line 618, $\\xi_i \\in [0, 1]$ is wrong since the mean $\\bar{x}$ is subtracted.\n2) In the equation below line 621, why $\\| \\pi_H(x)\\|^2 = \\| \\pi_H(\\bar{x}) \\|^2 + \\| \\pi_H(\\xi) \\|^2$?\n3) In the equation below line 621 and several other places, $X$ should be $x$. Also, $\\bar{\\xi}$ should be $\\xi$." } ]
zgh0ChWocO
Learning the Optimal Policy for Balancing Short-Term and Long-Term Rewards
Learning the optimal policy to balance multiple short-term and long-term rewards has extensive applications across various domains. Yet, there is a noticeable scarcity of research addressing policy learning strategies in this context. In this paper, we aim to learn the optimal policy capable of effectively balancing multiple short-term and long-term rewards, especially in scenarios where the long-term outcomes are often missing due to data collection challenges over extended periods. Towards this goal, the conventional linear weighting method, which aggregates multiple rewards into a single surrogate reward through weighted summation, can only achieve sub-optimal policies when multiple rewards are related. Motivated by this, we propose a novel decomposition-based policy learning (DPPL) method that converts the whole problem into subproblems. The DPPL method is capable of obtaining optimal policies even when multiple rewards are interrelated. Nevertheless, the DPPL method requires a set of preference vectors specified in advance, posing challenges in practical applications where selecting suitable preferences is non-trivial. To mitigate this, we further theoretically transform the optimization problem in DPPL into an $\varepsilon$-constraint problem, where $\varepsilon$ represents the minimum acceptable levels of other rewards while maximizing one reward. This transformation provides intuitive into the selection of preference vectors. Extensive experiments are conducted on the proposed method and the results validate the effectiveness of the method.
https://openreview.net/pdf/4b0324785fb814e07ff3e021f3a6a8e3db3d00a6.pdf
[ { "confidence": 3, "rating": 6, "review_id": "1RUVaV6NOa", "review_text": "This paper introduces a new way to balance multiple rewards with some long-term rewards potentially missing. It does so by using Pareto Policy Learning of optimizing each reward subject up to the tradeoff frontier. This can be more practical than simple linear weighting since the linear weighting strategy applies the constant weight regardless of the amount of conflict between pairs of objectives. Empirically, the papers show that the approach is superior to linear on two synthetic tasks with some real data. Overall I think the paper is promising and adding more realistic empirical evaluation can add values to the current state of the paper.\n\n- Learning to combine multiple rewards is an important and well-motivated question, and has wide ranging implications.\n- The method proposed is mathematically sound. The paper shows theoretically that the input parameters can be interpreted as a form of worst case value on each objective. \n- The paper explains how the most popular approach of linear weighting can fall short, derives the method through first principles, and empirically demonstrates that the proposed method is superior.\n\n- The main weakness of the paper is that the experimentation is rather limited. The experiment uses partial real data with synthetic generation of short-term and long-term rewards. For example, in robotic planning, the authors could show how their approach helps balance the long-term reward (e.g. goal reaching) / short-term reward (e.g. minimizing jerk). This is just an example, but including other more real-world planning and RL problems would seem beneficial.\n- It seems that compared to linear weighting, the proposed method seeks more short-term reward but is not necessarily better in terms of long-term reward. It may not be a weakness but reading the table does strike me that the method is more “short-sighted.”\n\n- The separation between short-term and long-term reward is practically meaningful, but mathematically the only difference is that one reward can be missing and the other is fully observable, since the Pareto Policy Learning treats all objectives the same. Do we necessarily need these separate definitions? Can we put everything as a long-term reward that can be missing sometimes?\n- How are the weighting and preference vectors chosen? Have the author considered running a sweep over different configurations to compare against linear weighting? Apologies if I overlook this detail from the paper." }, { "confidence": 2, "rating": 4, "review_id": "boI9Q1izTS", "review_text": "This paper attempts to address the challenge of learning the optimal policy for balancing multiple long-term and short-term rewards. The authors point out that the existing linear weighting method leads to a sub-optimal policy. To address this limitation, the authors propose formulating formulate the problem as a multi-objective optimization problem. They utilize the Lagrange algorithm to use preference vectors to solve the formulated multi-objective optimization problem and aim to learn the policy to meet Pareto optimization. In order to decide the preference vectors, the authors propose establishing the connection between the optimization problems and the ε-constraint problem. Experiments on IHDP and JOBS demonstrate the efficacy of the proposed method.\n\n1.\tThe multi-object problem is practical in both reinforcement learning and other optimization scenarios. The paper provides a good summary of the limitations of the existing linear weighting method and introduces a novel perspective on solving the problem by resorting to the Lagrange algorithm and Pareto optimization.\n2.\tThe author has a solid mathematical foundation and is able to provide detailed mathematical descriptions and solutions to the proposed optimization problems.\n\n1. The authors point out that the linear weighting method is suboptimal. However, there is no explanation in the method section or corresponding experiments to demonstrate that the proposed method (i.e. DPPL) is optimal.\n\n2. In line 38, the authors claim that when some of the rewards are interrelated, the linear weighting method can only achieve a suboptimal solution. The claim may not be rigorous as the linear weighting method might be able to model the relationship among the rewards. More explanation and experiments are required.\n\n3. In line 95, the definition of Pareto optimality, the condition for Pareto optimality by the author is to find the $\\theta$ that makes all $\\bar{\\mathcal{V}}$ optimal. However, is it possible that the $\\theta$ is not optimal for some $\\bar{\\mathcal{V}}$ but is optimal for the overall $\\bar{\\mathcal{V}}$?\n\n4. Some mathematical symbols and proprietary terms in the paper are not explained clearly. For example, what does the $e$ in line 110 mean? What does MOP represent? Does MOP represent multi-objective problems? What do $v$ and $R_{+}$ mean in line 171? What does the KKT condition mean? Is it the KKT condition in the Lagrange algorithm? What is the difference between the two descent directions $d_{rt}$ and $d_t$? There are many similar situations in the paper. I suggest providing necessary explanations for each noun and symbol that appears for the first time.\n\n5. In section Simulating Output and section Experimental Details, many parameters are defined by the authors themselves, but most of them do not have reasons or ablation experiments. For example, why is the number of preference vectors 10? In Line 253 to Line 254, why are some parameters truncated normal distributions and some Gaussian distributions?\n\n6. In Table 1 on the L-REWARDS metric, the proposed method is comparable to the linear weighting method. However, the authors claim that for most of the preference vectors, DPPL's solutions have better performance.\n\n7. In Figure 1, it seems that the effect on the $\\delta{w}$ from the missing rate and T is not obvious for either the proposed method or LW. More explanation is needed.\n\nPlease see the comments above." }, { "confidence": 1, "rating": 5, "review_id": "XYRn964rBA", "review_text": "This paper studies the tradeoff between short-term and long-term rewards. The authors formulate the policy learning problem as a multi-objective optimization problem and propose a decomposition-based Pareto policy learning method. I only had experience in reinforcement learning in robotics five years ago. I tried my best to understand the paper, but I am not sure about my rating and comments.\n\n- This paper studies a quite interesting and important problem, and the proposed methods seem effective on these two benchmarks.\n- The paper is well-organized, the division is relatively easy to follow, and the proposed method is well-motivated.\n\n- Only the linear weighting method is used as the baseline. I am wondering if there are any other methods that can be used for comparison. If not, why? Since both IHDP and JOBS are widely used.\n\nPlease see the weakness." }, { "confidence": 3, "rating": 6, "review_id": "GPykjiAt7X", "review_text": "This paper proposes a framework for solving multi-objective optimization problems: multi-objective optimization problems are divided into sub-problems in different regions by setting different preference vectors. The parameter optimization direction of the sub-problem can be easily solved by transforming it into a dual problem through the KKT condition, and a Pareto optimal solution of the original problem can be obtained by solving the sub-problem. This paper uses this framework to balance the optimal strategy learning under multiple short-term rewards and long-term rewards and achieves better and more stable performance than the traditional linear weighted method in the constructed experimental environment.\n\n1. This paper reveals in detail the connection between the proposed method and the linear weighted method and the epsilon-constrained optimization method. Based on this connection, the epsilon-constrained optimization method can provide interpretability for the method in this paper.\n2. The method in this paper theoretically overcomes the suboptimality problem of the linear weighted method and avoids the situation where the epsilon-constrained optimization method does not have a feasible solution.\n3. This paper obtains better and more stable results than the epsilon-constrained optimization method in the optimal strategy learning problem under multiple short-term rewards and long-term rewards constructed by the author.\n\n1. This paper mainly proposes an important multi-objective optimization algorithm and compares it with two existing algorithms in theory. However, the title of this paper seems to be just a specific application scenario of the algorithm. In what other scenarios can this algorithm be applied?\n2. The experimental part is mainly conducted in a constructed environment, and it is unclear how difficult it is in the field of causal inference.\n3. The v in line 171 is missing \\bar. In Appendix B, t in line 5 of Algorithm 1 should start from 0.\n\n1. How to deal with a situation where long-term rewards are missing? Should the data be ignored when solving the network?\nI think the optimization method proposed in this paper and the connection with related methods are very interesting, and I will improve my score as appropriate." } ]
zeaBrGv7Ll
SeeClear: Semantic Distillation Enhances Pixel Condensation for Video Super-Resolution
Diffusion-based Video Super-Resolution (VSR) is renowned for generating perceptually realistic videos, yet it grapples with maintaining detail consistency across frames due to stochastic fluctuations. The traditional approach of pixel-level alignment is ineffective for diffusion-processed frames because of iterative disruptions. To overcome this, we introduce SeeClear--a novel VSR framework leveraging conditional video generation, orchestrated by instance-centric and channel-wise semantic controls. This framework integrates a Semantic Distiller and a Pixel Condenser, which synergize to extract and upscale semantic details from low-resolution frames. The Instance-Centric Alignment Module (InCAM) utilizes video-clip-wise tokens to dynamically relate pixels within and across frames, enhancing coherency. Additionally, the Channel-wise Texture Aggregation Memory (CaTeGory) infuses extrinsic knowledge, capitalizing on long-standing semantic textures. Our method also innovates the blurring diffusion process with the ResShift mechanism, finely balancing between sharpness and diffusion effects. Comprehensive experiments confirm our framework's advantage over state-of-the-art diffusion-based VSR techniques.
https://openreview.net/pdf/01c997cd807103ddbfa717a7445b4e1837ebeb53.pdf
[ { "confidence": 4, "rating": 6, "review_id": "rBDNY59obz", "review_text": "The authors propose SeeClear for Video Super-Resolution (VSR). SeeClear is a diffusion-based method that improves restoration performance by introducing semantic priors. The authors design an Instance-Centric Alignment Module (InCAM) and Channel-wise Texture Aggregation Memory (CaTeGory) to utilize semantic information effectively. Comparisons on multiple datasets demonstrate that the proposed method achieves state-of-the-art performance.\n\n1. The paper introduces semantic priors to achieve spatial modulation and temporal correlation, improving diffusion-based VSR performance. This idea is both reasonable and effective.\n2. The authors design the Instance-Centric Alignment Module (InCAM) to align using semantic information, avoiding pixel inconsistencies and being well-suited for diffusion models.\n3. Additionally, the authors propose the Channel-wise Texture Aggregation Memory (CaTeGory) to transfer semantic information between different frames.\n4. Comparisons with state-of-the-art methods demonstrate the effectiveness of the proposed method.\n5. The paper is well-organized, with clear and aesthetically pleasing layouts, figures, and tables.\n\n1. The method uses pre-trained models to extract semantic information, introducing significant additional computation, which limits the method's applicability. Meanwhile, the paper lacks comparisons of complexity and parameter counts.\n2. The method lacks experimental support for some critical hyperparameters, such as the choice of k in InCAM and the number of frames used in SR.\n3. The paper proposes using wavelet transform to improve UNet but lacks experimental justification for why simple downsampling and upsampling wouldn't be more efficient.\n4. Figure 1, while aesthetically pleasing, is challenging to understand. It would be better to clearly explain the network structure (e.g., Figure 8) and the inference process.\n\n1. Why do the comparison methods in Table 1 use different numbers of frames? If the same frame is used, what is the performance like?\n2. In the ablation study (model 2, Table 2), how to use semantic conditions without MFSA, InCAM, and CaTeGory?\n3. In InCAM, what is the value of k in top k, and how is it determined? Is there experimental support for this choice?\n4. Others see weaknesses." }, { "confidence": 4, "rating": 5, "review_id": "PZceSyv6zD", "review_text": "The paper introduces a novel video super-resolution framework leveraging semantic distillation to enhance pixel condensation in diffusion-based models. SeeClear addresses stochastic fluctuations by using a Semantic Distiller and a Pixel Condenser to extract and upscale semantic details from LR frames. The framework includes an Instance-Centric Alignment Module and a Channel-wise Texture Aggregation Memory to improve temporal consistency and visual quality. Experimental results demonstrate SeeClear's superiority over state-of-the-art diffusion-based VSR techniques.\n\n- The combination of semantic distillation and pixel condensation is novel and effectively addresses the challenges of maintaining detail consistency across frames in diffusion-based VSR.\n- The Instance-Centric Alignment Module (InCAM) and Channel-wise Texture Aggregation Memory (CaTeGory) significantly improve short-term and long-term temporal coherence.\n- The paper provides extensive experiments to demonstrate SeeClear's advantages over sotas across multiple benchmarks.\n\n- Lack of computation analysis. Diffusion-based methods are often criticized for unbearable inference time, so it would be better to list params, runtime, and FLOPs/MACs for a fair comparison.\n- Lack of an ablation study on the wavelet transform which is introduced in Section 3.1.\n- Table 2 is incomplete, making it difficult to assess the effect of the CaTeGory.\n- The Other baselines such as VRT and IconVSR are also evaluated on Vimeo-90K-T and UDM10 datasets. Could you complete it for a fair comparison?\n- Figure 7 needs more explanation.\n\nDiffusion-based models usually show poor performance on PSNR (e.g., StableSR and Reshift), but SeeClear demonstrates a significant improvement. Could you analyze which parts of SeeClear contribute to this improvement?\nPlease refer to the weaknesses part above." }, { "confidence": 4, "rating": 6, "review_id": "7pjzbCOjNV", "review_text": "This paper presents a diffusion-based video super-resolution method, and proposes Instance-Centric Alignment Module and Channel-wise Texture Aggregation Memory. The former leverages a pre-trained open-vocabulary segmentation model (i.e., OpenSeeD), which is utilized to perform alignment within video clips by modulating the spatial and temporal features. The latter leverages channel-wise attention and memory mechnism to better super-resolve the video frames. The results on publich benchmarks indicate that the proposed method achieves state-of-the-art perceptual performance.\n\n1. The proposed method achieves state-of-the-art perceptual results on REDS4 and Vid4.\n\nAlthough the proposed method achieves promising results on the public benchmarks, there are some concerns that greatly affect the rating of this paper.\n\n1. The presentation of the method needs to be improved. The readability of the paper is unsatisfactory. The technical details and the rationale behind it is not clearly described and explained.\n(a) The main figure (Figure 1) is ambiguous. It is hard to understand the workflow of the framework based on this figure. It is also hard to see the connection among different modules.\n(b) In the abstract, what is the \"conditional video generation\" (L6)? I do not see any pre-trained conditional video generation module in the described method. Maybe it should be rephrased.\n(c) In L206-207, what is the role of \"randomly initialized tokens\"? And what is specific role of the encoder-decoder module?\n(d) In L187-188, are the \"semantic tokens\" actually text embeddings ? What is the difference?\n(e) In L223, how to divide channels into different groups and what is rationale behind it?\n(f) It is hard to understand Eq. (16), (17) and (18). From (17) and (18), it seems T_j is used to calculate itself, which is confusing.\n(g) The choice of mathematical notations is sub-optimal and confusing.\n(h) In L149, I think the \"belta_t\" should be \"yita_t\".\n\n2. The novelty of this paper is limited.\n(a) Some of the modules are based on existing methods. For example, the way of introducing semantic features is similar to SFT (but no comparison in the paper); the multi-frame self attention is from [21].\n(b) The proposed blurring ResShift is a modification version based on ResShift, but the rationale behind it is not fully explained. Also, there is no direct ablation.\n\n3. The comparison with other related methods are not thorough. \n(a) The authors should explicitly compare with ResShift [33], since residual shifting technique is also exploited (but no citation in L48). Also, there is no comparison with it in Sec. 2.2.\n(b) The authors should compare with Upscale-A-Video [36], another diffusion-based video super-resolution method. Also, it is recommended to compare the performance of [36].\n(c) The authors should compare with SFT[28], another method also leveraging semantic segmentation information.\n\n4. The proposed method is not fully ablated. There is no direct ablation for exploitation of DWT and blurring resshift.\n\n5. Some of the statements could be inappropriate. \n(a) In L35-36, I think it is hard to reach the given conclusion from [8]. Please elaborate.\n(b) The naming of \"semantic distiller\" could be inappropriate. The pre-trained semantic segmentation model is directly leveraged and frozen. I don't see any distillation.\n\n1. In the abstract, what is the \"conditional video generation\" (L6)? I do not see any pre-trained conditional video generation module in the described method.\n2. In L206-207, what is the role of \"randomly initialized tokens\"? And what is specific role of the encoder-decoder module?\n3. In L187-188, are the \"semantic tokens\" actually text embeddings ? What is the difference?\n4. In L223, how to divide channels into different groups and what is rationale behind it?\n5. It is hard to understand Eq. (16), (17) and (18). From (17) and (18), it seems T_j is used to calculate itself, which is confusing. Please elaborate.\n6. In InCAM, the way of introducing semantic features seems similar to SFT. Please compare with it and illustrate the significance of the proposed module.\n7. Please provide comparison with the following related methods, and illustrate the novelty of the proposed modules. \n(a) The necessity and rationale of blurring ResShift, and the ablation study.\n(b) Upscale-A-Video [36]. And please compare performance with it quantitatively.\n8. In L35-36, I think it is hard to reach the given conclusion from [8]. Please elaborate." }, { "confidence": 5, "rating": 3, "review_id": "P98EqfWz2c", "review_text": "The paper presents a framework for video super-resolution (VSR) that improves temporal coherence and high-resolution detail generation. The proposed method, SeeClear, integrates a Semantic Distiller and a Pixel Condenser to extract and upscale semantic details from low-resolution frames. The framework employs an Instance-Centric Alignment Module (InCAM) and Channel-wise Texture Aggregation Memory (CaTeGory) to enhance inter-frame coherence and incorporate long-standing semantic textures. The methodology also introduces a blurring diffusion process with the ResShift mechanism to balance sharpness and diffusion effects. Experimental results show that SeeClear outperforms state-of-the-art diffusion-based VSR techniques in terms of perceptual quality and temporal consistency.\n\n1. The SeeClear framework introduces a combination of semantic distillation and pixel condensation, which significantly enhances video super-resolution.\n2. The Instance-Centric Alignment Module (InCAM) and Channel-wise Texture Aggregation Memory (CaTeGory) improve the temporal coherence of the generated high-resolution videos.\n3. The integration of blurring diffusion with the ResShift mechanism effectively balances sharpness and diffusion, leading to high-quality detail generation.\n\n1. While the method demonstrates robust restoration capabilities, it may still struggle with accurately restoring tiny objects or intricate structures, especially under severe degradation conditions.\n2. The method has been tested primarily on specific benchmark datasets. Its performance in real-world applications, where video degradation processes are more varied and unpredictable, remains to be thoroughly evaluated.\n3. The experiments are not sufficient and should be improved.\n\n1. The performance of the proposed method is not significant. In Table 1, the improvement is very marginal or is worse than other methods. Moreover, in Figure 4, the generated texture is comparable to other methods.\n\n2. It would be better to compare more methods. (a) Transformer-based (e.g., VSR Transformer) or RNN-based method (e.g., BasicVSR++); (b) Diffusion-based image restoration methods (e.g., DDRM, DDNM, DeqIR, etc). (c) Compare methods trained with more frames (e.g., VRT-16).\n\n3. The authors should compare the efficiency, including model size, training/inference time and FLOPs. The efficiency comparisons can demonstrate the effectiveness of the proposed method." } ]
zeYyq0GpXO
Exploring Context Window of Large Language Models via Decomposed Positional Vectors
Transformer-based large language models (LLMs) typically have a limited context window, resulting in significant performance degradation when processing text beyond the length of the context window. Extensive studies have been proposed to extend the context window and achieve length extrapolation of LLMs, but there is still a lack of in-depth interpretation of these approaches. In this study, we explore the positional information within and beyond the context window for deciphering the underlying mechanism of LLMs. By using a mean-based decomposition method, we disentangle positional vectors from hidden states of LLMs and analyze their formation and effect on attention. Furthermore, when texts exceed the context window, we analyze the change of positional vectors in two settings, i.e., direct extrapolation and context window extension. Based on our findings, we design two training-free context window extension methods, positional vector replacement and attention window extension. Experimental results show that our methods can effectively extend the context window length.
https://openreview.net/pdf/93def3ab0f03ff754a0ecc781166fd37fd5b563c.pdf
[ { "confidence": 4, "rating": 6, "review_id": "2EEgqJtRt4", "review_text": "This paper disentangles positional vectors from the hidden states of a pretrained Transformer language model to facilitate the understanding of length extrapolation. After a series of analyses, this paper proposes two context extending techniques. Experiments show that the proposed methods lower the perplexity on the task of language modeling.\n\nIt's always good to have a mechanistic interpretability view of the hidden states of language models. The findings presented in this paper might inspire follow-up work along this direction.\n\nThe experiments presented in the current draft are not convincing enough to me. See questions below.\n\n1. Instead of continue training from the tinyllama model, I think training models from scratch using the 50B tokens budget will make the results in this paper more convincing. This is because you can get rid of the ripple effect of the originally used rope positional embeddings. Maybe your models were trying to unlearn rope during the continue training stage?\n2. Apart from testing perplexity scores on the task of language modeling, I highly recommend the authors adding the experiment of needle in a haystack, otherwise I do not know if the models are really using all the tokens.\n3. How do you decide the values of alpha and lambda in section 4.1 and 4.2? In addition, the temperature scaling technique was also used in several other places [1, 2] with explanations of how they did temperature selection.\n\n[1] YaRN: Efficient Context Window Extension of Large Language Models\n[2] Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation" }, { "confidence": 3, "rating": 7, "review_id": "Ky79P1tkV6", "review_text": "This paper proposes a mean-based decomposition technique to analyze the formation and effect of positional encodings in LLMs. It then uses these results to propose methods to extend the context window, resulting in models that generalize better to longer texts.\n\n1. This paper is very well-written, and the main findings are properly highlighted.\n\n2. This paper not only explains how positional vectors are formed, but also introduces methods to interpolate them based on the findings.\n\n3. Experiments are performed to show that the new methods result in better perplexity scores beyond the context window.\n\nI believe this contribution is novel and insightful enough, and there is no apparent weakness.\n\n1. The legends and graphs in Figure 4 overlap." }, { "confidence": 4, "rating": 7, "review_id": "MWlqHVPBbE", "review_text": "This paper dives into the inner workings of how transformer-based language models handle positional information. By decomposing hidden states into semantic and positional vectors, the authors give a series of analysis about how the positional information are encoded and propagated through layers. I believe this work offers valuable insights for understanding the positional information within the transformer architecture.\n\nVery detailed and clear analysis about how the positional information is encoded and propagated within the transformer architecture, and to the best of my knowledge, I have not seen similar work before. I particularly enjoyed reading Figure 2 and 3, which shows how positional information is propagated through layers and goes beyond the window size, and shows how the manipulation of the positional embedding causally influence the attention patterns, particularly removing the attention sink.\n\nThere are few points that I would like to suggest here to make the paper even stronger.\n\n- Section 4 feels weak and unnecessary. The performance of replacing the positional vector, if my understanding is correct, seems to be much worse than Dynamic NTK. Given the current mainstream approach is modifying the base of Rope (like YaRN), which is much easier than the approach proposed by this work, I do not think this work’s proposed context extension will be accepted by mainstream model builder.\n- That being said, I think the in-depth analysis of the positional embeddings are strong enough for me to give an acceptance (I learned a lot from it), so **I would strongly suggest removing the content of section 4, and use its space for more experimental analysis of the positional vectors**\n\nThere are a few important problems that I believe will receive the communities’ attention and worth being addressed:\n\n- Although this paper shows the positional information can propagate through layers (Figure 2), in practice, many work found that models with window attention cannot pass the needle in a haystack test, and this is why Mistral 1.5 changed its attention back to full attention. It would be insightful if the authors can discuss the relationships between positional information and needle-in-a-haystack performance (because needle in haystack is what makes long-context models useful), i.e., why window attention cannot pass needle in haystack even it does have the correct positional information?\n- This paper’s analysis is restricted on TinyLLaMA, but TinyLLaMA is not a widely used open-source model, thus casting the doubt whether this discovery of this paper will hold for other model families, particularly mainstream open-weight models like LLaMA 3, Mistral, QWen or Yi. I would strongly suggest the authors verify the behavior of positional embedding on either LLaMA 3, Mistral, QWen, or Yi.\n\nCurrently I’m given a borderline accept, and I will consider increasing my scores if the authors could either (1) discuss the relationship between positional vectors v.s. needle-in-a-haystack or (2) verify that the properties of positional vectors hold for LLaMA 3, Mistral, QWen or Yi (any 2 out of the 4).\n\nsee the above weakness section" } ]
zeNwOAcb4q
Estimating Transition Matrix with Diffusion Models for Instance-Dependent Label Noise
Learning with noisy labels is a common problem in weakly supervised learning, where the transition matrix approach is a prevalent method for dealing with label noise. It estimates the transition probabilities from a clean label distribution to a noisy label distribution and has garnered continuous attention. However, existing transition matrix methods predominantly focus on class-dependent noise, making it challenging to incorporate feature information for learning instance-dependent label noise. This paper proposes the idea of using diffusion models for estimating transition matrix in the context of instance-dependent label noise. Specifically, we first estimate grouped transition matrices through clustering. Then, we introduce a process of adding noise and denoising with the transition matrix, incorporating features extracted by unsupervised pre-trained models. The proposed method enables the estimation of instance-dependent transition matrix and extends the application of transition matrix method to a broader range of noisy label data. Experimental results demonstrate the significant effectiveness of our approach on both synthetic and real-world datasets with instance-dependent noise. The code will be open sourced upon acceptance of the paper.
https://openreview.net/pdf/ae76b3fa16b296da6130bdfad77eb462fe20fc45.pdf
[ { "confidence": 4, "rating": 3, "review_id": "h8Qfdux6HK", "review_text": "This paper deals with the problem of supervised learning from noisy labels, where the label noise is modeled using instance-dependent label transition probability matrix. Mainly, this work attempts to leverage conditional diffusion model in order to obtain a generative model of transition matrix conditioned on the sample features. To that end, this work first generate pseudo paired samples $( x_i, T_i )_{i=1}^N$ using existing method (VolMinNet). Secondly, a conditional diffusion model is trained that generates $T_i$ given $x_i$. Finally, the classifier is trained taking into consideration the estimated transition matrix from the diffusion model.\n\n1. The problem considered is of interest to the broad ML community\n2. Adequate experimental settings, baselines, and ablations are provided for numerical validation.\n3. The attempt to apply diffusion model is novel.\n\n1. The technical soundness of the proposed method is questionable. Essentially, the proposed method trains a conditional diffusion model using paired samples $(x_i, T_i)$. If we consider the true transition matrix as $T(x)$ for a sample $x$, then the idea of the proposed method is to train a conditional generative model $p( T(x) | x )$. There are several issues with this attempt and the proposed implementation:\n (a) The authors use pseudo transition matrix $T_i$ generated from a sample-independent method (VolMinNet). $T_i$ only depends upon the cluster assignment of $x_i$. The diffusion model, at best, can approximate the conditional distribution $p( T_i | x_i )$. This has no clear relation to $p(T(x) | x)$. Therefore, in principle, the transition matrix generated by the trained diffusion model cannot be better than that returned by VolMinNet.\n (b) Second, the transition matrix is modeled as a deterministic function of sample, i.e., only one $T(x)$ exists for a given $x$. Therefore, it does not make sense to learn a generative model for $p(T(x) | x)$, since it is a degenerate distribution (probability of all other matrices should be zero except the true $T(x)$). \n\n2. Another hint at why the proposed method should be limited by the pseudo paired sample distribution is that the diffusion model training part (which is ultimately used as transition matrix estimator) does not require available noisy labels. Hence, no extra information can be extracted about the true transition matrix $T(x)$ beyond the information captured by the pseudo paired samples $(x_i, T_i)$. \n\n2. It is unclear where the performance gain in empirical results is coming from. The manuscript does not provide any intuitive or theoretical explanation to justify the quality of their estimator. Moreover, no rationale for the algorithm design is provided.\n\nPlease provide a rebuttal for each of the points in weaknesses section." }, { "confidence": 4, "rating": 2, "review_id": "RuqywPTT8L", "review_text": "This paper focuses on the estimation of the transition matrix with instance-dependent label noise. They used a diffusion model for this estimation. By applying a diffusion process to the transition matrix, the diffusion model is trained to generate transition matrices from a prior distribution. The instance-wise generated transition matrix is then used to train the classifier with a forward cross-entropy loss. The improvement of the method is demonstrated by experiments on benchmark and real-world datasets.\n\nThe instance-dependent label noise scenario is a challenging task.\n\n* The reason for generating the transition matrix using a diffusion model is unclear.\n * The instance-dependent transition matrix is the target to be estimated, but it is uncertain what role training a diffusion model to generate the transition matrix without a fixed target.\n * In addition, as mentioned by the authors, the transition matrix must be satisfied: the entries are greater than 0, the row sum is to be 1, and the diagonal entry is typically the largest. However, these considerations have not been taken into account in the construction of the diffusion process. Although a transformation method is proposed in Section 3.4, there is no discussion of how this affects the training of the diffusion model.\n\n* Pre-trained features are fed into the diffusion network, but their impact on the diffusion process has not been analysed. This could be seen as providing additional conditional information during the diffusion process, implying that this diffusion model might be a conditional diffusion model. It would be better to discuss these consideration.\n\n* In Algorithm 3, it appears that the diffusion model is trained in order to generate the initialized $T_i$. I wonder if the desired training is for the initialized $T_i$ to be generated perfectly as is. This could lead to a transition matrix that might not contain instance-dependent information, raising questions about the mechanism by which diffusion training introduces variance.\n\n* The diffusion training seems to take a considerable amount of time, which needs to be analysed. If it takes a long time, the performance improvement may not be significant in comparison.\n\nPlease see the Weaknesses part." }, { "confidence": 5, "rating": 3, "review_id": "zGrjwWtnAN", "review_text": "In this work, the authors proposed an approach to estimate the instance-dependent transition matrix in order to reliably learn from noisy labels. The idea is to use a condition diffusion model to estimate the transition matrix by using the pretrained extracted image features as the conditions. Once the transition matrices are estimated, the classifier is learned through the corrected cross entropy loss. Experiments are presented to compare the performance of the approach with other baselines using both synthetic and real noisy datasets.\n\nThe paper is easy to read and notations are clearly stated\n\nThe main weakness is the lack of support and discussion in substantiating the idea. Experiments are insufficient to support the claims.\n\nQuestions:\n\n1.\tA key concern is that estimation of the transition matrix is heavily dependent on the initializations given to the diffusion model learning. The diffusion model intuitively tries to approximate the distribution of its inputs through its forward and reverse process. In the traditional setting, the original image features is the input. But in your case, the initializations estimated through clustering and volmin optimization are the inputs. This part is quite unclear how does it help learn the true instance-dependent transition matrices.\n\n2.\tIn the experiments, in Table 4, I do not see the ablation study with just using the initialized transition matrix and training the classifier, which is important to see the effect of the diffusion model-based learning for the TM. The ablation study corresponding to “w/o diffusion” says that it is using the pre-trained model. \n\n3.\tExperiment results all look good compared to the baselines, but I do not see any clear intuition/discussion to substantiate this idea of instance-dependent transition matrix estimation" } ]
zcEPOB9rCR
Bridging Geometric States via Geometric Diffusion Bridge
The accurate prediction of geometric state evolution in complex systems is critical for advancing scientific domains such as quantum chemistry and material modeling. Traditional experimental and computational methods face challenges in terms of environmental constraints and computational demands, while current deep learning approaches still fall short in terms of precision and generality. In this work, we introduce the Geometric Diffusion Bridge (GDB), a novel generative modeling framework that accurately bridges initial and target geometric states. GDB leverages a probabilistic approach to evolve geometric state distributions, employing an equivariant diffusion bridge derived by a modified version of Doob's $h$-transform for connecting geometric states. This tailored diffusion process is anchored by initial and target geometric states as fixed endpoints and governed by equivariant transition kernels. Moreover, trajectory data can be seamlessly leveraged in our GDB framework by using a chain of equivariant diffusion bridges, providing a more detailed and accurate characterization of evolution dynamics. Theoretically, we conduct a thorough examination to confirm our framework's ability to preserve joint distributions of geometric states and capability to completely model the underlying dynamics inducing trajectory distributions with negligible error. Experimental evaluations across various real-world scenarios show that GDB surpasses existing state-of-the-art approaches, opening up a new pathway for accurately bridging geometric states and tackling crucial scientific challenges with improved accuracy and applicability.
https://openreview.net/pdf/f266817fbebd23596b1820451d7726da8a9cd128.pdf
[ { "confidence": 3, "rating": 6, "review_id": "YhMppRX4vs", "review_text": "The paper introduces the Geometric Diffusion Bridge (GDB), a novel framework designed to generate the evolution of geometric states in geometric (coordinate) systems. GDB uses a diffusion bridge connecting initial and target geometric states with equivariant transition kernels, preserving symmetry and joint state distributions. Furthermore, GDB can use a chain of equivariant diffusion bridges to leverage trajectory data for more accurate dynamic modeling.\n\n- The presentation of theorems in Section 3.1 is clear and straightforward, establishing a solid theoretical foundation for GDB. The authors effectively derive theorems and integrate them with point cloud states.\n- GDB demonstrates strong performance across various tasks, including QM9, Molecule3D, and OpenCatalyst IS2RS.\n\nI have no complaints regarding the technical and experimental sections, as they are well-written. However, I wonder existing works, such as [1] and [2], also use diffusion bridges over molecular data. What advantages does your approach have over theirs?\n\n[1] Diffusion-based Molecule Generation with Informative Prior Bridges. Lemeng Wu, et al. NeurIPS 2022.\n\n[2] DiSCO: Diffusion Schrödinger Bridge for Molecular Conformer Optimization. Danyeong Lee, et al. AAAI 2024.\n\nSee weaknesses." }, { "confidence": 3, "rating": 5, "review_id": "2TQfyxw16c", "review_text": "This paper proposes a generative model for bridging initial and target geometric states using diffusion bridge. This work introduces an equivariant diffusion bridge based on equivariant transition kernels for symmetry constraints. The proposed method was validated on diverse settings including simple molecules and adsorbate-catalyst complex, outperforming previous MLFF baselines.\n\n- The motivation of using diffusion bridge to bridge initial and target geometrical states is reasonable. \n- Using diffusion bridge model for equilibrium state prediction and structure relaxation is novel to the best of my knowledge, and the paper shows that GDB significantly outperforms previous methods with diverse datasets.\n- Equivariant design of bridge process is based on solid theory.\n- The paper is well written except for some missing relevant works on diffusion bridge.\n\n- Related works on diffusion bridges or diffusion mixtures were not discussed. Diffusion bridges has been studied in [1,2,3,4] with applications to molecules, graphs, point clouds, and images, and more recent works have studied general framework for diffusion bridges [5, 6] which is worth discussing. While GDB has a contribution for using diffusion bridges in new tasks, discussing related works and clarifying the novel contributions is necessary in particular for strengthening the contribution of this work.\n- Contribution seems limited as using diffusion bridge as generative modeling was already studied [1,2,3,4], in particular deriving diffusion bridges using Doob's h-transform. Designing an equivariant diffusion process (not necessarily bridge) specifically in SE(3) group has been covered in [7,8, 9]. What is the difference of designing equivariant diffusion bridges compared to equivariant diffusion processes?\n\n[1] Peluchetti, Diffusion Bridge Mixture Transports, Schrodinger Bridge Problems and Generative Modeling, JMLR 2023\n[2] Liu et al., Learning Diffusion Bridges on Constrained Domains, ICLR 2023\n[3] Wu et al., Diffusion-based Molecule Generation with Informative Prior Bridges, NeurIPS 2022\n[4] Jo et al., Graph Generation with Destination-Predicting Diffusion Mixture, arXiv 2023 \n[5] Albergo et al., Stochastic Interpolants: A Unifying Framework for Flows and Diffusions, arXiv 2023\n[6] Shi et al., Diffusion Schrodinger Bridge Matching, NeurIPS 2023\n[7] Xu et al., GeoDiff: a Geometric Diffusion Model for Molecular Conformation Generation, ICLR 2022\n[8] Xu et al., Geometric Latent Diffusion Models for 3D Molecule Generation, ICML 2023\n[9] Yim et al., SE(3) diffusion model with application to protein backbone generation, ICML 2023\n\n- What is the reason for using deterministic process (i.e., probability flow ODE) instead of the original stochastic process? Does ODE results in better performance?\n- Is GDB scalable to geometric states of high dimensions? While analysis on this may not be necessary, it could strengthen the work." }, { "confidence": 4, "rating": 3, "review_id": "Cw5ZHsPvpj", "review_text": "This paper proposes a type of diffusion model that captures the evolution of geometric states. The model is characterized by a diffusion SDE that couples the initial state with the target state, in the middle of which trajectory guidance is enabled when such data present. The framework is designed to yield equivariant density similar to other geometric diffusion models. Experiments on equilibrium state prediction with or without trajectory data have been performed to verify the applicability of the proposed approach.\n\n1. The distinction between existing works has been elaborated in Table 1, which is clear.\n\n2. The method is designed with an option to leverage additional trajectory data, which is quite interesting.\n\n1. The experimental setup and comparison with baselines on equilibrium state prediction is a bit troublesome which requires more clarification or additional comparisons. Please refer to Q1.\n\n2. The presentation is a bit unclear. Please refer to Q2.\n\n3. Additional baselines may be considered. The baselines selected in the paper are not closely connected to the proposed approach. See Q3.\n\n4. Missing ablation studies. In the current shape it is unclear where the performance gain comes from. See Q4.\n\nQ1. The evaluation protocol on QM9 and Molecule3D, especially to compare with direct prediction approaches, is not a common practice. A more convincing benchmark protocol would be to compare with methods such as GeoDiff [1] on molecule generation tasks since they are also generative models. Since the paper is positioned to tackle generative modeling, the experiments should also be designed to align with the goal.\n\nQ2. Could the authors provided detailed sampling algorithm this approach adopts? If the model uses sampling approach similar to other diffusion models, there should be related discussions on sampling steps/sampling time the method consumes.\n\nQ3. A more reasonable baseline would be to directly apply existing bridge models (e.g., [2]) to the current task by switching the backbone to the one this paper adopts. This would help the audience understand the unique contribution of this work since both bridge models and equivariant (geometric) diffusion models have been proposed in literature.\n\nQ4. Ablation studies such as investigating the importance of preserving equivariance of the density modeled should be included. This would help justify the necessity of the proposed components.\n\n\n[1] Xu et al. GeoDiff: a Geometric Diffusion Model for Molecular Conformation Generation. In ICLR'22.\n\n[2] Zhou et al. Denoising Diffusion Bridge Models. In ICLR'24." }, { "confidence": 2, "rating": 6, "review_id": "PxQHToTFl4", "review_text": "In this paper, the authors introduce a Geometric Diffusion Bridge (GDB) framework, which aims to predict the evolution of geometric states in complex systems accurately, crucial for fields such as quantum chemistry and material modeling. Traditional methods face computational challenges, while deep learning approaches lack precision and generality. The authors use Doob’s h-transform to construct an equivariant diffusion bridge. By applying Doob’s h-transform, the authors adjust the SDE to ensure that the process starts from an initial geometric state and is conditioned to reach a target geometric state. This ensures that the transformed process respects the symmetry constraints of the geometric states, leading to more accurate and physically meaningful predictions.\n\n+ The framework utilizes an equivariant diffusion bridge derived from a modified Doob’s h-transform. This ensures that the diffusion process respects symmetry constraints, making the predictions more robust and reliable.\n+ The paper provides a theoretical framework analysis about preserving symmetries and accurately modeling evolution dynamics.\n+ Experimental evaluations show that GDB is better than state-of-the-art approaches in various real-world scenarios, including equilibrium state prediction and structure relaxation tasks.\n+ The framework achieves significant error reduction compared to strong baseline models, particularly in challenging tasks such as structure relaxation in the Open Catalyst 2022 dataset\n\n- The framework, especially when leveraging trajectory data, might introduce significant computational overhead. The simulation-free matching objective is designed to be efficient, but the overall framework’s computational demands might still be high\n- Some mathematical notations and definitions in the paper could be made clearer. For instance, explicitly defining all variables and functions used in the modified Doob’s h-transform and constructing equivariant diffusion bridges would improve readability and understanding.\n\nSee above" } ]
zb8jLAh2VN
Inference of Neural Dynamics Using Switching Recurrent Neural Networks
Neural population activity often exhibits distinct dynamical features across time, which may correspond to distinct internal processes or behavior. Linear methods and variations thereof, such as Hidden Markov Model (HMM) and Switching Linear Dynamical System (SLDS), are often employed to identify discrete states with evolving neural dynamics. However, these techniques may not be able to capture the underlying nonlinear dynamics associated with neural propagation. Recurrent Neural Networks (RNNs) are commonly used to model neural dynamics thanks to their nonlinear characteristics. In our work, we develop Switching Recurrent Neural Networks (SRNN), RNNs with weights that switch across time, to reconstruct switching dynamics of neural time-series data. We apply these models to simulated data as well as cortical neural activity across mice and monkeys, which allows us to automatically detect discrete states that lead to the identification of varying neural dynamics. In a monkey reaching dataset with electrophysiology recordings, a mouse self-initiated lever pull dataset with widefield calcium recordings, and a mouse self-initiated decision making dataset with widefield calcium recording, SRNNs are able to automatically identify discrete states with distinct nonlinear neural dynamics. The inferred switches are aligned with the behavior, and the reconstructions show that the recovered neural dynamics are distinct across different stages of the behavior. We show that the neural dynamics have behaviorally-relevant switches across time and we are able to use SRNNs to successfully capture these switches and the corresponding dynamical features.
https://openreview.net/pdf/a72b19b658dbec5e7192f749e9871e5279caf5ab.pdf
[ { "confidence": 4, "rating": 5, "review_id": "1VAJQ1p6ad", "review_text": "This paper develops a switching RNN (SRNN) framework to model neural activity. It builds up on switching linear dynamical system models that are used in neuroscience to segment and extract underlying dynamics of observed neural activity. The different segments corresponding to unique dynamics often reflect distinct behavioral states. The crucial novelty of this work is that they allow the dynamics to be non-linear, unlike SLDS and rSLDS, making the model more expressive. They fit these models using VI using an inference network. Finally, they apply SRNN to synthetic data, as well as 3 distinct neural datasets and show that it outperforms SLDS and rSLDS on segmenting activity into behavioral modules where each module corresponds to distinct dynamics. They visualize these underlying dynamics, and also evaluate their fitted model on predicting future neural activity.\n\n1. As we move towards large-scale neural datasets, it is crucial to scale model complexity in order to fully harness these datasets. This paper makes a step in that direction by allowing for non-linear dynamics, while also providing an appropriate fitting approach.\n2. The experiment section is extensive, and I appreciate the application to multiple neural datasets. I particularly found the results on the decision-making dataset to be most impressive. \n3. The literature review is thorough, and the authors do a good job of situating their work in the context of other related studies.\n\n1. The authors mention switching nonlinear dynamical systems (Dong et al. 2020), and discuss how their work differs from Dong et al. I think it is important to either provide an experimental comparison to SNLDS or a justification for why these existing models are insufficient to explain neural datasets, as the main novelty/motivation for SRNN and SNLDS is very much related (also noted by the authors in the paper). More on this in the question section.\n\n2. Behavioral segmentations are somewhat subjective in nature, and while I can see that in the experiments shown here they make sense, in a real world setup we may want to infer the number of such segmentations from the data. Here the authors set the number of discrete states to the # of true behavioral states, however this might not be known in practice. Furthermore, there might be distinct sets of dynamics within one behavioral state due to other reasons not totally explicit from behavior. From the current set of results, it is not clear if SRNN is capable of inferring the # of underlying states. I will elaborate more in the questions section on this as well. \n\n3. I also think the paper will benefit from some editing by the authors. The references are not formatted properly, commas are missing. The referencing to supplementary figures doesn't seem to be working, it links back to figures in the main text. I also think the authors can trim some of the background, such as the section on VI, in favor of explaining some of the experiments such as the Lorenz attractor setup in more detail.\n\n4. While I appreciate the extensive experiments, I find it hard to reconcile some of the results. It seems like in some of the plots (Fig 3C/D, Fig 5C/D) prediction + reconstruction performance across all models is similar. However, the discrete states being inferred look hugely inaccurate for SLDS and rSLDS. I wonder if the authors have thoughts on why this happens.\n\n1. I feel it is crucial to understand the pros and cons of SNLDS vs SRNN, and understand if the differences between the two models are consequential in modeling neural data. For example, I understand some of the modeling differences such as the dependence of discrete state transitions on the data in SNLDS vs the previous continuous state in this paper, but I am not sure if one of the two is a better assumption. Additionally, I am also struggling to understand why can SNLDS not be used for prediction?\n\n2. In the real-world experiments, were the number of discrete states across all models being compared set to the true number of behavioral states? I would be curious to see how results vary across different # of discrete states across these models, perhaps MSE on prediction or ELBO vs # of discrete states is a possible way to show this. This is for two mains reasons:\ni. In a new dataset we might not know ground-truth behavioral segmentations, and will want to be able to infer the number of such segmentations from the data. Hence, it would be interesting to see if SRNN can be used to do so.\nii. The fact that rSLDS does fine it predicting data but infers discrete states inaccurately makes me wonder if it is clustering data differently, perhaps collapsing 2 states into one or further segmenting one behavioral state into multiple slightly different states." }, { "confidence": 4, "rating": 6, "review_id": "mDNXRHhvug", "review_text": "The authors develop a new class of probabilistic nonlinear state space models called switching RNNs. In essence, this extends the well-known switching linear dynamical system (SLDS) model to switch between nonlinear dynamics governed by a stochastic RNN.\n\n* The results shown in panels A of Figs 3, 4, and 5 are nice and convincing.\n\n* Like many other deep learning based approaches, the model is not particularly interpretable. For example, panel F in Figs 3, 4, and 5 shows 2D flow fields for the different hidden states, but the RNN hidden state is 16-dimensional. Here the authors have used PCA to attempt to find a reasonable 2D flow field, but I know from experience that this has the potential to very poorly capture the true dynamics of the system. Intuitively, even small variance dimensions can matter a lot if the flow field changes rapidly along that dimension.\n\n* There are many tunable parameters in this model (e.g. number of continuous and number of discrete states). It is unclear how to choose these on datasets without ground truth, or at least good educated guesses.\n\n* Related to above, I worry a lot about the identifiability of this model. A nonlinear RNN without discrete switching can already model any flow field if given enough units. Thus a model with many continuous states (e.g. $P=128$) but zero discrete states may perform equally well to a model with few continuous states (e.g. $P=16$ or $P=8$) but a handful of discrete states. How would one then go about choosing between these models? Adding discussion or ideally some sort of mathematical analysis regarding the statistical identifiability of the model would be very helpful.\n\n* Equation (2) seems wrong to me. The nonlinearity $f(\\cdot) = \\tanh(\\cdot)$ doesn't make sense here since $p(z_t \\mid z_{t-1}, h_{t-1})$ should be a positive number. Perhaps you meant to use a softmax nonlinearity here?\n\n* Equation (2) also seems to suggest that the transition probability only depends on $h_{t-1}$ and not $z_{t-1}$. Is this correct?\n\n* Related to the above, it wasn't obvious to me whether you are allowing the continuous hidden state to impact the transition for the discrete state. Essentially I am wondering if your model is analogous to the switching LDS (where the continuous hidden state doesn't impact the transition statistics) or instead the recurrent switching LDS (where the continuous state does impact the discrete transition probabilities). In Figure 1B, what is the meaning of the red dashed arrow? Does that carry any difference to the black arrows?\n\n* Are the good results shown in panel A of Figs 3, 4, and 5 due solely to differences in the initialization procedure across models? (see top of page 5)\n\n* Regarding identifiability, what happens if you run the model multiple times from different random seeds? Do you recover the same flow fields and fixed point structure?" }, { "confidence": 4, "rating": 3, "review_id": "doTdTsAQtM", "review_text": "The authors propose to model time series neural population activity using switching recurrent neural networks. The generative model includes discrete latent states\n\nThe proposed method does appear to outperform related switching linear dynamical systems approaches in certain contexts.\n\nHigh-level:\n- The contribution beyond other switching nonlinear dynamical systems models is not clear. Such models include the cited Dong et al., 2020, as well as Karniol-Tambour et al., ICLR 2024. If there is a contribution beyond these works, the authors should compare against those existing related methods.\n- The authors do not demonstrate an ability to automatically determine the appropriate number of discrete states. One approach to this might be \"co-smoothing\" (see Yu et al., Gaussian Process Factor Analysis, 2009).\n\nDetails:\n- The mathematical details and notation are often unclear. For example, equation 2 does not appear to be a valid probability distribution, given the description that f(.) = tanh(.). Shouldn't this instead be a categorical distribution or similar? Relatedly, f is also used in equation 8, but from the context it appears to denote something entirely different.\n- The authors should more clearly describe the cross-validation techniques for used for each dataset. The blanket statement in the intro to Section 4 (\"On each dataset, we do N-fold cross-validation, where N equals to the number of conditions, sessions, or subjects in the dataset\") obscures how cross-validation was actually applied in each instance.\n\n- Are the predictions in Figure 2 cross validated (eg., using the technique described in section 3.3?\n- In Fig 3, are the authors modeling single-trials or condition averages (ie PSTHs)? This should be addressed. It looks like they are predicting condition averages since the \"true\" neural activity in 3E takes on continuous values (rather than indicating spike times or binned spike counts).\n- Why does SRNN perform worse than SLDS and rSLDS in Figure 5CD?" }, { "confidence": 3, "rating": 6, "review_id": "fBuXPJ3x0b", "review_text": "The paper proposes switching recurrent neural networks (SRNN), which allow the RNN weights to switch across time. The RNN weights switch based on a latent Markovian process of discrete states. The authors apply SRNN to a simulated dataset following the Lorenz attractor and three real-world neural recordings.\n\n- Clarity: The authors clearly explain the problem, related work, and methodology with well-written equations and easy-to-understand figures. \n\n- Extensive use of datasets: The paper applies SRNN to numerous real-world neural datasets, illustrating the effectiveness of SRNN in accurately segmenting different datasets in an unsupervised fashion.\n\n- Lack of comparison with other methods:\nThe paper compares SRNN to (r)SLDS models. However, there exist many other models for unsupervised segmentation. For example, ARHMMs and their extensions are simple yet powerful and interpretable models for segmentation [1, 2]. The authors should cite and consider comparisons with multiple model classes.\nIn addition, the paper notes in line 103 that SRNNs have the most comparable structure to SNLDS, but the authors do not make comparisons. The authors should also cite and compare with [3], which has switching nonlinear dynamics.\n\n[1] Wiltschko, A. B., Johnson, M. J., Iurilli, G., Peterson, R. E., Katon, J. M., Pashkovski, S. L., ... & Datta, S. R. (2015). Mapping sub-second structure in mouse behavior. Neuron, 88(6), 1121-1135.\n\n[2] Lee, H. D., Warrington, A., Glaser, J., & Linderman, S. (2023). Switching autoregressive low-rank tensor models. Advances in Neural Information Processing Systems, 36, 57976-58010.\n\n[3] Karniol-Tambour, O., Zoltowski, D. M., Diamanti, E. M., Pinto, L., Tank, D. W., Brody, C. D., & Pillow, J. W. (2022). Modeling communication and switching nonlinear dynamics in multi-region neural activity. bioRxiv, 2022-09.\n\n- Experiments:\nThe simulated experiment with the Lorenz attractor shows that SRNN does well when it has access to noiseless observations with known state dimensions. In order to have a more convincing simulated experiment, the authors could consider the following. First project the Lorenz attractor to a higher dimensional space and add additive Gaussian noise. Then fit SRNN (and other compared models) to the dataset to see if it can recover the Lorenz attractor and true latent state dimension (using some metric on held-out data). Another simulated experiment could be done with a dataset that simulates the NASCAR track [1,2].\n\n[1] Linderman, S. W., Miller, A. C., Adams, R. P., Blei, D. M., Paninski, L., & Johnson, M. J. (2016). Recurrent switching linear dynamical systems. arXiv preprint arXiv:1610.08466.\n\n[2] Lee, H. D., Warrington, A., Glaser, J., & Linderman, S. (2023). Switching autoregressive low-rank tensor models. Advances in Neural Information Processing Systems, 36, 57976-58010.\n\n- What are some failure modes of the model? Does extra flexibility mean that SRNNs need more data than simpler models such as ARHMMs or SLDSs? I'm curious how the SRNNs would do in a low-data regime (e.g., sample a small amount of dataset from an SLDS).\n\n- Have you tried fitting the model to datasets other than neural data? Based on how well SRNNs do in segmenting the neural datasets, I'm curious how SRNNs would do on other types of datasets, such as mouse behavioral dataset [1].\n\n[1] Wiltschko, A. B., Johnson, M. J., Iurilli, G., Peterson, R. E., Katon, J. M., Pashkovski, S. L., ... & Datta, S. R. (2015). Mapping sub-second structure in mouse behavior. Neuron, 88(6), 1121-1135.\n\n- How are the hyperparameters selected? Based on how long it takes to fit each SRNN to the datasets, I wonder if it is feasible to sweep over the hyperparameter space." } ]
zaXuMqOAF4
Mesa-Extrapolation: A Weave Position Encoding Method for Enhanced Extrapolation in LLMs
Large language models (LLMs), although having revolutionized many fields, still suffer from the challenging extrapolation problem, where the inference ability of LLMs sharply declines beyond their max training lengths. In this work, we conduct a theoretical analysis to better understand why No Position Encoding (NoPE) fails outside its effective range, as well as examining the power of Position Encoding (PE) in this context. Our findings reveal that with meticulous weave position, PE can indeed be extended beyond effective range. Our theorems establish that LLMs equipped with weave PE can achieve improved extrapolation performance without additional cost. Furthermore, we introduce a novel weave PE method, Mesa-Extrapolation, which utilizes a chunk-based triangular attention matrix and applies Stair PE to manage the final chunk. This method not only retains competitive performance but also offers substantial benefits such as significantly reduced memory demand and faster inference speed. Extensive experiments validate the effectiveness of Mesa-Extrapolation, demonstrating its potential as a scalable solution to enhancing LLMs’ applicative reach.
https://openreview.net/pdf/aadea357c4db6663d79a4fd8524c3afe7c77650c.pdf
[ { "confidence": 5, "rating": 5, "review_id": "UzSBA20dDB", "review_text": "The paper conducts a theoretical analysis to help understand the No Position Encoding. Also, the paper proposes weave position encoding to achieve improved extrapolation performance without additional cost. Also, the paper introduces the weave PE method, Mesa-Extrapotion, which recalculates the position ID to reduce the gap between training and inference. Finally, the paper conducts experiments to prove the effectiveness of Mesa-Extropoation.\n\n* The presentation of wave PE, Star PE, and Mesa-Extrapolation is clear. The author also provides the details of wave PE, Star PE and Mesa-Extrapolation to help understand the concepts\n* The author conducts experiments to prove the effectiveness of the proposed Mesa-Extrapolation.\n* The author also further analyzes the Latency & Memory Usage of the proposed Mesa-Extropoaltion.\n* The paper discusses the limitations for further discussion.\n\n**Major Concerns**: It seems that the proposed method Star PE is the same as Self-Extend LLM [1]. If possible, I sincerely hope that the author can address the following concerns:\n* **Concern 1**: The Figure 1 Star PE implementation result does not match the Equation proposed in Section 4.1 Page 5. In Figure 5, when t-i is 5, the implementation result of Star PE is 4. However, according to the Equation proposed in Section 4.1 Page 5, the implementation result should be N+ $\\lceil (t-i-N)/E \\rceil$=4+$\\lceil (5-4)/2 \\rceil$=5. Hence, to match the implementation result of Figure 1, the Star PE calculation equation should be N+ $\\lfloor (t-i-N)/2 \\rfloor$.\n\n* **Concern 2**: The Equation of Star PE is almost the same as Self-Extend LLM. When t-i is small than N, both Star PE and Self-Extend LLM employ normal relative distance. When t-i is larger or equal to N, we discuss it below.\n * The Equation of Star PE is N+ $\\lfloor (t-i-N)/E \\rfloor$ (as shown in Figure 1), while N is called the extrapolated position and E is called the extrapolated width, and t-i is the relative distance. \n * The Equation of Self-Extend LLM is $(t-i)//W + (W- W//G)$=$W+ (t-i)//G - W//G$, while W is called neighbor window size and G is called group size. Apparently, when W%G==0, the Equation of Self-Exntend LLM becomes $W+ (t-i)//G - W//G$=$W+ (t-i-W)//G$= $W+\\lfloor (t-i-W)/G \\rfloor$. Then, we change the notation W to N and the notation G to E, we have N+ $\\lfloor (t-i-N)/E \\rfloor$, which is the same as Star PE.\n* **Concern 3**: If possible, could the author compare the performance between Mesa-Extropolation and Self-Extend LLM?\n* **Concern 4**: when the output sequence length $L_{generate} \\gg L_{input}$, will the time cost also becomes O($L_{generate}^2$)?\n\nBased on the above concerns, the paper may need to rethink the major contribution. The proposed Mesa-Extrapolation seems to make sense and may benefit society, while the paper should clarify its original contribution.\n\n\nReference:\n\n[1] Jin, H., Han, X., Yang, J., Jiang, Z., Liu, Z., Chang, C. Y., ... & Hu, X. (2024). Llm maybe longlm: Self-extend llm context window without tuning. arXiv preprint arXiv:2401.01325.\n\nPlease see the above question and concerns in weakness." }, { "confidence": 3, "rating": 8, "review_id": "sV1kiiL1Hx", "review_text": "This paper studies the length extrapolation of LLMs.\n1. It provides a theoretical analysis of why NoPE and PE fail to extrapolate beyond a certain length. Previous work has shown that this failure is related to the explosion of hidden states as positions increase. This paper demonstrates that both NoPE and PE suffer from this hidden state explosion, using a constructive approach to illustrate the existence of Transformer weights.\n2. It proposes weave PE, a simple adaptation of PE that theoretically addresses the extrapolation issue. It also provides a simple implementation of weave PE, using a chunk-based triangular attention matrix. Then, it demonstrates that the proposed extrapolation scheme matches the performance of prior length extrapolation methods, such as Dynamic-NTK.\n\n- Great theory explains the failure of NoPE and PE in length extrapolation.\n- Proposes weave PE, derived from the theoretical analysis, which also works well in practice.\n- Shows good empirical results in passkey retrieval, language modeling, and summarization.\n\n1. Methodological comparison with $\\Lambda$-Attention\n\nThe proposed Stair PE resembles the $\\Lambda$-attention of LM-Infinite & Streaming-LLM, yet with differences in 1) the additional attention at the bottom, and 2) a different length extrapolation scheme, Meta-Extrapolation. In the experiments, Meta-Extrapolation significantly outperforms LM-Infinite & Streaming-LLM. Could the authors provide the intuition behind these empirical gains?\n\n---\n2. Empirical comparison with Dynamic-NTK\n\nDynamic-NTK outperforms Meta-Extrapolation on the summarization task for mid-lengths of 7-11k, while Meta-Extrapolation shows better performance on summarization for shorter lengths of 4-6k and better language modeling fluency for lengths greater than 11k. Could the authors provide the intuition behind these results?\n\n---\n3. Relation between input sequence length $T$ and effective length $M$\n\nThe theorems only show the existence of an effective length $M$, but do not provide intuition on the scale of $M$, such as the ratio over the input length $M / T$. Could the authors provide some intuition on this? If I understand correctly, $M$ is set from the construction of the Transformer weights, so can it be controlled to an arbitrarily large number?\n\n---\nEditorial comments\n\n- The fonts of the figures and tables are too small. Please make them more readable.\n- Some parts of the writing are mechanical. For example, lines 116-120 do not provide meaningful information. It would be great to discuss the implications of the theorems in natural language. For instance, both theorems state the failure of length extrapolation in NoPE and PE, rather than just \"revealing the internal mechanism of extrapolation.\"\n\nSee Weaknesses." }, { "confidence": 4, "rating": 4, "review_id": "oDAQcreLt8", "review_text": "The paper proposes a positional embedding scheme to address the extrapolation issue: train on short sequences, evaluate on longer sequences. Authors propose a theoretical framing of the positional embeddings contribution to attention. They apply their analysis to NoPE (No Positional Embedding) and to standard PE, and RoPE. They propose the Mesa-Extrapolation idea where input tokens are organized so that attention is paid to nearby tokens and those at other key positions. Authors validate their findings with empirical evidence on several benchmarks and applications.\n\nThe paper is about a very relevant topic which has attracted a lot of attention lately. The paper proposes a simple approach to solve the problem which seems to be easy to adapt to different positional embedding models. Some of the numerical experiments are encouraging.\n\nThe theory part of the paper is hard to read and I am not sure about its usefulness. Result appear hand-wave-y and vaguely stated. For example the definition of the threshold H in the Assumption is surprising (see questions). Numerically, experiments on language modeling and Summary of Tasks do not seem to show the method's claims.\n\n1. Can authors explain the threshold definition: \"When o > H, LLM extrapolates successfully. Once o < H, LLM extrapolation\nfails.\" Is there a typo and inequalities are reversed?\n\n2. In Fig 2, why dim 1 & 6 are of interest?" }, { "confidence": 4, "rating": 6, "review_id": "n2N3eksheo", "review_text": "This paper introduces a new LLM length extrapolation method, called Mesa-extrapolation, which utilizes a chunk-based triangular attention matrix and applies stair PE. The proposed method is based on theoretical analysis. The paper conducts extensive experiments on passkey, PPL, summarization to demonstrate the effectiveness.\n\n1. The paper provides a theoretical analysis to prove the effectiveness of meticulous weave position with PE for length extrapolation.\n2. The proposed method is efficient and is proved to be effective through extensive experiments.\n\n1. The passkey retrieval experiment is simple, good performance on the passkey is far from a real usable context window. Please consider to add evaluations on Ruler[1] and RepoQA[2]\n\n2. The achieved context window is limited. \n\n\n\n[1] https://arxiv.org/abs/2404.06654\n\n[2] https://evalplus.github.io/repoqa.html\n\n1. Length extrapolation is a necessary technique, but the current extrapolation length is very limited. Considering that there are already many models that have undergone long context window extension, such as phi3-mini-128k, can your proposed method continue to perform length extrapolation on these long context LLMs? If so, it will significantly enhance the impact of your method\n\n2. If I understand correctly, the proposed method is mainly for those with PE. Why is there a need to prove NoPE? Is NoPE your baseline?\n\n3. The proposed Mesa-extrapolation is somehow similar as a variant of \"sliding window attention + attention sinks\". Could the author explain why mesa-extrapolation is theoretically superior compared to sliding window attention and attention sinks?" }, { "confidence": 4, "rating": 6, "review_id": "vMxATPOc3A", "review_text": "The authors propose a weave position encoding method to enhance LLMs’ inference performance when the input context window exceeds the training context window. This method can be integrated into existing pretrained LLMs without additional finetuning. To support their findings, the authors conducted theoretical analyses on the failure reasons of various position encoding methods, including those without position encodings. They demonstrate that the significant shift in the hidden state’s value range, when input token positions exceed the maximum context length, is the cause of this phenomenon.\n\nOne of the strengths of the proposed method is that it can be integrated into existing pretrained LLMs without requiring any additional finetuning. This makes the method highly practical and easy to implement, saving both time and computational resources.\n\nThe method has demonstrated excellent performance in pass key retrieval tasks, showcasing its effectiveness in real-world applications. This indicates that the proposed approach not only works in theory but also delivers tangible improvements in practical scenarios.\n\nThe authors have conducted comprehensive theoretical analyses to understand the failure reasons of various position encoding methods, including those without position encodings. This thorough investigation provides a solid foundation for the proposed method and enhances its credibility\n\nThe proposed position encoding method, while promising, does not consistently improve performance across different tasks. This inconsistency suggests that the method may not be universally applicable or reliable in every context, potentially limiting its overall utility. \n\nAdditionally, the main narrative of the paper emphasizes the method’s ability to handle extrapolation beyond the training context window. However, given the observed variability in improvements, it would be more accurate to adjust the claims to better reflect the method’s performance, providing a more balanced and realistic presentation of the work.\n\nThe caption for Figure 1 is not sufficiently informative.\n\nAdditionally, it is unclear how the failure of an LLM is measured in Section 3.4 and Figure 2. \n\nThe experiments visualizing hidden state values in Figure 2 would have been more effective if conducted on the same task and with the same setup as Figure 3. This alignment would allow for a clearer connection between the findings in Figures 2 and 3.\n\nminor typos:\n\nTheorem 3.2: an simple -> a simple\n\nLine 161: defer to -> refer to\n\nLine 242-243: a significant disruptions" } ]
za9Jx8yqUA
GenRL: Multimodal-foundation world models for generalization in embodied agents
Learning generalist embodied agents, able to solve multitudes of tasks in different domains is a long-standing problem. Reinforcement learning (RL) is hard to scale up as it requires a complex reward design for each task. In contrast, language can specify tasks in a more natural way. Current foundation vision-language models (VLMs) generally require fine-tuning or other adaptations to be adopted in embodied contexts, due to the significant domain gap. However, the lack of multimodal data in such domains represents an obstacle to developing foundation models for embodied applications. In this work, we overcome these problems by presenting multimodal-foundation world models, able to connect and align the representation of foundation VLMs with the latent space of generative world models for RL, without any language annotations. The resulting agent learning framework, GenRL, allows one to specify tasks through vision and/or language prompts, ground them in the embodied domain’s dynamics, and learn the corresponding behaviors in imagination. As assessed through large-scale multi-task benchmarking in locomotion and manipulation domains, GenRL enables multi-task generalization from language and visual prompts. Furthermore, by introducing a data-free policy learning strategy, our approach lays the groundwork for foundational policy learning using generative world models. Website, code and data: https://mazpie.github.io/genrl/
https://openreview.net/pdf/70954c88d0935070ce03152cf1eb67a2f2ac5a2e.pdf
[ { "confidence": 5, "rating": 5, "review_id": "9JrOmW91C7", "review_text": "In this work, the authors propose learning a pixel-based reconstructive world model, and then separately learn networks to convert the representations of a pretrained VLM into the learned world model latent space. By using a VLM trained via contrastive alignment, this essentially enables the projection of both image as well as text inputs into the latent space of the world model, and therefore simple similarity can be used to provide rewards for downstream policy learning.\n\nThis reviewer is a supporter of the idea of unifying the representation spaces of a large-scale pretrained VLM and that of a world model. This author appreciates the benefits: matching behavior of a world model with natural language can enable text-conditioned generalization. The preliminary experiments show promise.\n\n**Originality**: This work appears decently original.\n\n**Quality**: This works quality is acceptable.\n\n**Clarity**: The clarity of the work is acceptable, the core ideas are communicated clearly. However, there are lots of open questions surrounding this work that could be elaborated upon further.\n\n**Significance**: This work appears to be decently significant, as a preliminary investigation in this space.\n\nChief amongst the weaknesses of this work is the limited environments applied to, and also the limited baselines (essentially, the only existing work the authors compare against is VLM-RM). The authors can consider comparing against other forms of text-conditioned policy learning, such as LIV for the kitchen setting, or Text2Reward and similar approaches for the general case. It also seems a strange setup to normalize in-between expert and random, and report results in this way. This reviewer is unaware of prior work that performs evaluations in this way. What is the rationale behind this evaluation strategy compared to what is used in prior work?\n\nDetails about certain components of the model and how they are implemented are sparse. For example, is the aligner a video generative model (text-to-video model)? How is it implemented?\n\nIt is a bit dissatisfying to rely on a corrupted version of vision as a language embedding. It seems strange that the aligner should on one hand be learning to bring language embeddings meaningfully across modalities to the image/video space, which the authors motivate is necessary because of the multimodality gap. However, the authors then treat language embeddings as a noisy corruption of a video embedding - so essentially the training objective for the aligner is essentially a denoising? And rather than bridging a modality gap, the aligner is essentially a denoiser?\n\nWhy do we not learn the reverse direction, where we optimize a world model's latent space that projects into existing VLM space? This design decision is not elaborated upon, but seems more intuitive to this reviewer.\n\nFrom the video demonstrations, on the associated project website, it is rather unclear what is happening. Are Behavior Retrieval videos from expert policies in an offline dataset that are matched with a particular text prompt/input video? What are those text prompts/input videos? It's not clear what the retrieval setup is. For Multitask Generalization, it is also not obvious what the corresponding prompts are. Furthermore, the results for multitask generalization do not seem smooth and natural, despite being simplistic DM_Control environments (especially the case for their proposed simplified Stickman environment) and they are missing Kitchen environments. In the end, it appears that their method is still good as a retrieval technique (retrieving already-achieved expert behaviors in \"Behavior retrieval\") due to the underlying VLM, and is decent at reconstructing video prompts, but still suffers in terms of learning coherent policies (e.g. what is visualized in \"Multitask generalization\"), which is ultimately what is of interest.\n\nFor the video prompts that are decoded, it appears as if almost all of them are rather stationary (with the exception of the cheetah/dog example and the human dancing example) - they collapse to a stationary goal pose. Perhaps this is because the clips are so short (8 frames) that essentially it boils down to pose-matching. It is not obvious that this is that beneficial in supervising motion; so why does this improve upon static image supervision? Indeed, many of the results that are shown learned by the policy are rather stationary and do not have much movement (most are just jitters around a stationary pose). It then begs the question how this approach improves upon just a static goal supervision. However, the authors simultaneously find that in static tasks other methods outperform the authors' approach.\n\nThis reviewer pushes back on the term \"data-free RL\", as there still needs data (and interaction data) to learn their method. This is a very confusing terminology, and honestly the generalization comes from the large-scale pretrained VLM - it would be more appropriate to reuse the terminology of zero-shot reward models or zero-shot policy learning used in prior works across alignment methods (vision-language models are zero-shot reward models for reinforcement learning, [Rocamonde, '23]) and diffusion (text-aware diffusion for policy learning, [Luo, '24]).\n\nThis reviewer really enjoys the work but believes there are many open questions that warrant further explanation. Furthermore, the evaluation suite (environments) and comparison suite (benchmarks) is rather weak. The idea is indeed neat, but the execution leaves much to be desired, and therefore this reviewer believes the work is of borderline quality.\n\nWhy Stickman instead of Humanoid? Humanoid has been able to be solved with pixel-based world models in the past (Appendix A of Dreamerv2). What are the specifications of Stickman and with what criteria was it designed?\n\nWhy did the authors use a simple GRU? Why was a more advanced world model not used, like RSSMs? Was this tested or ablated over?\n\nWhy were there no multi-task generalization experiments performed for Meta-World kitchen?\n\nWould the authors consider Text2Reward and other approaches that learn a reward function as data-free RL, as there do not need additional data to generalize to learning new policies? Alternatively, the data-free RL paradigm sounds like zero-shot generalization for policy learning, which is already offered by VLM-RM and other similar works.\n\nWhy did the authors choose to generate the video demonstrations synthetically, rather than use actual natural video clips? Would the performance not be better when using natural videos which are more in line with what the base VLM was trained on?\n\nHow are the rewards computed using temporal alignment? Essentially, are only the rewards for the most-aligned segments across the target trajectory and the agent used as rewards, and for all other timesteps a 0 reward is provided? This computation seems rather expensive for long-horizon trajectories." }, { "confidence": 3, "rating": 7, "review_id": "Ga9LEYTBXF", "review_text": "The paper looks at a method for leveraging foundation multimodal models for learning world models in RL. They do so by aligning the latent space of a video language model with that of a generative model that can be used for learning in imagination. This is done by training connector-and-aligner networks . The rewards for a task can then be derived by measuring the cosine sim between representations of the states visited by a policy and the states generated by the connector aligner network when it is conditioned on a language-based task prompt. A policy can be optimised to maximise this alignment based reward.\n\nTransferring foundation model knowledge to improve policy learning is an open problem of interest to the community. \n\nThe paper provides a successful recipe for aligning a foundation model with the world model for a specific domain that we want to do policy learning in. \n\nThe paper is written well.\n\nI'm currently being conservative in giving a borderline accept score, since some aspects of the method are not clear to me (I have addressed this in my questions below) - but I will be happy to raise my score after engaging with the authors once they have addressed these questions.\n\n1. I would have expected that simple tasks with clearly distinguishable static end states (such as standing) should have worked equally well with CLIP rewards, however the table shows a big difference between the proposed method and the image-language reward baselines even on those tasks, which leads me to think that the baselines may be missing out on some component that the proposed method has. What could be missing, or is this intuition wrong? \n2. The generations in Fig 6a are actually not accurate at all - many of the poses don’t correspond to the humanoid pose if you look closely and would actually optimize learning to strike the wrong pose if a policy is trained with it.\n\n1. Why is setting b=imagination horizon the same as doing no alignment? (Line 160)\n2. I’m not completely sure how you train the aligner-connector network: is it done by 1) using images collected from the downstream domain (in this case from the mujoco sim), 2) getting their VLM visual embedding and their world model encoder embedding and aligning those? As for the text part, is this done by corrupting the VLM visual embedding (to approximate the language embedding) and aligning it again with the world model encoder? What is the policy used to collect the data and resulting data distribution? I understand that Fig 5 is somehow related to this question but this could be made clearer. For eg. which task’s policy is chosen to collect the data to train the MWFM for the results in the main table (how is this policy related to the task being evaluated)? \n3. The discussion around Figure 5 is not very clear to me - how do we infer that “’run’ data proves more generalizable than ’walk’ or ’stand’ data across tasks“ - the figures suggest that training on ‘stand’ led to the highest rewards for downstream tasks\n4. “This can be explained by the fact that the target sequences that GenRL infers from the prompt are often slightly in motion“ - could you explain why that would be the case (it inferring the closest matching state as one that is in motion)?" }, { "confidence": 4, "rating": 3, "review_id": "xaOe9RDTac", "review_text": "This paper proposes to combine a DreamerV3-style world model with a pretrained vision language model (VLM). By training two small adaptors to align the latent space of the VLM with that of the world model, the aligned representations from the VLM can be used as a reward signal to train agents in the world model.\n\nThe training process consists of two main parts. \n\n1) There is a large offline dataset needed in the environments of interest (prompt and trajectories of states and actions), generated by expert RL agents and a random policy. This trains the world model and the adaptors. Each environment (domain) uses a separate world model.\n\n2) Actor-critic agents are trained purely within this world model’s imagination, a separate policy for each task. The paper shows these agents outperform standard model-free offline RL methods trained on only the large offline dataset. It also shows some effectiveness at generalizing to new tasks within an environment, specified with a new text prompt.\n\n- The core idea of the paper is very nice. There is a lot of interest from the community in working out how to get value from the broad general knowledge locked away in LLMs and VLMs, into RL agents. This paper offers a novel way to attack this – to my knowledge world models have not been used in this context before. \n\n- The results are not dazzling, but they indicate the approach works and it outperforms standard (though perhaps weak) offline-RL baselines. Section 4.2 shows promise in generalizing to knew text prompts in an existing environment.\n\nMy main criticism of the paper is that the narrative oversells what the core work actually supports. I detail examples below. Overall I’d suggest either presenting the work that has been done comprehensively in a more reserved manner, or adding the required work to support the broader claims and experiments. Either way, I think changes would be large enough to require a resubmission. I’m disappointed to not be able to give the paper a higher score as I liked the main idea. \n- The capability of the model to condition on visual goals is presented as a main functionality of the model – featuring in the first figure, the abstract, and throughout the paper. But the only evidence to support this is a very brief and qualitative experiment (Figure 6a). Everything else is conditioned on text. I am of the opinion that conditioning on visuals would likely work, but the paper must present good evidence to support this.\n- Several aspects of the title ‘Multimodal foundation world models for generalist embodied agents’ are misleading. 1) Only one modality is really tested (as in prior point). 2) ‘Foundation world models’ suggested I'd see a single very general world model. But in Appendix D is an important detail -- each environment learns a separate world model, so they are only general or foundational within a specific Mujoco embodiment. This kind of detail is important and should be honestly discussed in the main paper. 3) A ‘generalist agent’ is referred to, but every agent in the paper only performs a single specialist task, there is nothing general about the agent’s themselves.\n- The method is reported as needing ‘no language annotations’ (line 42). This is not true. The large offline dataset requires text prompts accompanying each trajectory.\n- The paper claims to be ‘the first large-scale study of multitask generalization from language in RL’ (line 165), but I can think of others. Language table is the first that comes to mind. \n- One of the motivations for the work is that reward functions can be hard to specify, while language is a more natural form. However, the large offline dataset is generated by using multiple expert agents which need reward functions.\n- ‘Data free RL’ is suggested as a new paradigm for foundation models in RL. I’d argue that this is simply know as zero-shot generalization to most in the community.\n- Main experiments are presented in Table 1. Whilst the offline-RL methods are one comparison point, I’m not sure how comparable they are, since they are all model-free while GenRL is model based. Are there any model-based variants that would be easily considered as baselines? The differences are reflected in the different compute times required – GenRL takes 5 days for world model training +5 hours per policy, while the baselines take 7 hours per policy. This seems like an unfair comparison, especially to withhold the detail to the appendix.\n- Results in Minecraft are briefly mentioned in Section 5. But so few details are given that I am lost as to what it is showing. This should either be removed or full details added. \n- The paper presents a new stickman environment. But details are sparse. The authors have failed to correctly identify this in Checklist Section 13.\n\nSee weaknesses." }, { "confidence": 4, "rating": 5, "review_id": "MUgfCxgHWG", "review_text": "The paper wants to leverage the large-scale pre-training of foundation models trained on internet data to train a world model for embodied agents that generalizes across tasks and domains. This is done by training a world model in the standard way, but in addition training aligner and connector networks that (1) map language embeddings to video embeddings and (2) map video embeddings to world model latent states. At inference time, this allows conditioning the world model on a task language prompt and then training in imagination to learn policies.\n\n- On the website, the reconstruction results from language and video are nice and quite unexpected (I'm unsure why the aligner and connector networks are able to generalize to new prompts) \n- The problem the paper is trying to solve is relevant, especially given the mismatch in data availability between embodied and vision / language settings\n\n- The main claim of the paper is strong generalization performance, leveraging the internet scale pre-training of video-language models. The bottleneck is the generalization ability of the networks which map embeddings from the video-language model to the world model latent states, and on the quality of the world model itself. I don't see why the aligner and connector should generalize.\n- Given the main claim, I would like stronger baselines / ablations in the generalization and data-free settings. Currently, there are no baselines in the data-free case which makes it impossible to assess how well the method generalizes. \n- Many of the experimental details are unclear in the paper (please see my questions). I encourage the authors to explain these better in the rebuttal and camera-ready, and also provide some intuition for why their method is better than the baselines. \n- In the single task, offline RL case, all the baselines are model-free, whereas the proposed method utilizes a model. I would have liked to see at least one model-based baseline to confirm that the improvement is because of the better reward signal and not because of the model-based optimization.\n- In the single task, offline RL case, reward is computed by looking at the similarity between the representations of the task prompt and the image / video. In the case of the base lines, these representations are fixed (eg. CLIP / Intern2Video representations), whereas for the proposed method these are taken from the last layer of the model learnt on the data itself. This is also reflected in the compute budget - the model takes 5 days to train (in addition to the 5 hours of training in imagination).\n\n- What is the value of $k$ (number of frames predicted by the connector)? What duration does this correspond to? What happens if the task is longer than this duration?\n- Just to confirm, in the offline RL evaluation, first the world model is trained on the offline data (only for that particular domain) and then the policy is trained in imagination? In that case, why is there a difference in the time taken for the actor-critic to converge in the data-free setting (see line 524)\n- In the single task, offline RL case, is the aligner trained with only as many language prompts as the tasks in that domain? If that is the case, it would be trained to reconstruct $e^{(v)}$ corresponding to many different videos in the offline dataset, some of which might contain suboptimal trajectories which have nothing to do with the language prompt. How can we expect the aligner to learn anything useful in this case?\n- If the aligner is trained only on a few language prompts, how is it able to generalize to new tasks?\n- What exactly is the multi-task generalization setting? In this evaluation, does the method get access to offline data from the OOD task? If yes, how is it used to train the policy? If no, how are the model-free baselines trained in this setting?" } ]
zZVqZRXSao
Semantic Feature Learning for Universal Unsupervised Cross-Domain Retrieval
Cross-domain retrieval (CDR) is finding increasingly broad applications across various domains. However, existing efforts have several major limitations, with the most critical being their reliance on accurate supervision. Recent studies thus focus on achieving unsupervised CDR, but they typically assume that the category spaces across domains are identical, an assumption that is often unrealistic in real-world scenarios. This is because only through dedicated and comprehensive analysis can the category composition of a data domain be obtained, which contradicts the premise of unsupervised scenarios. Therefore, in this work, we introduce the problem of **U**niversal **U**nsupervised **C**ross-**D**omain **R**etrieval (U^2CDR) for the first time and design a two-stage semantic feature learning framework to address it. In the first stage, a cross-domain unified prototypical structure is established under the guidance of an instance-prototype-mixed contrastive loss and a semantic-enhanced loss, to counteract category space differences. In the second stage, through a modified adversarial training mechanism, we ensure minimal changes for the established prototypical structure during domain alignment, enabling more accurate nearest-neighbor searching. Extensive experiments across multiple datasets and scenarios, including close-set, partial, and open-set CDR, demonstrate that our approach significantly outperforms existing state-of-the-art CDR methods and other related methods in solving U^2CDR challenges.
https://openreview.net/pdf/cb8530b37653420a6b121610aa8e7adce3e7940a.pdf
[ { "confidence": 4, "rating": 7, "review_id": "hcOW8GR45G", "review_text": "This paper introduces the problem of Universal Unsupervised Cross-Domain Retrieval (U2CDR) and proposes a two-stage semantic feature learning framework to address it. The framework includes a cross-domain unified prototypical structure established through an instance-prototype-mixed contrastive loss and a semantic-enhanced loss in the first stage, and a modified adversarial training mechanism to ensure minimal changes during domain alignment in the second stage. Extensive experiments demonstrate that this approach significantly outperforms existing state-of-the-art CDR methods in solving U2CDR challenges.\n\n1. This paper addresses a new problem, namely Universal Unsupervised Cross-Domain Retrieval, and proposes an initial solution.\n2. The paper first formulates the problem and then introduces the proposed method in a hierarchical manner, which is clear and well-structured.\n3. The ability to perform U2CDR has broad implications for various applications, such as image search, product recommendations, and artistic creation.\n\n1. The main effort of the paper seems to be on designing an optimization method. However, the optimization methods involved appear to be mostly existing ones. The authors should enhance the description of the novelty.\n2. Although the paper uses $L_{SPR}$ to maintain the semantic structure within domains, how to maintain the relationship between the positive pairs across domains should be emphasized.\n3. The analysis related to the Ablation Study seems insufficient. It would be beneficial to analyze the reasons for the experimental results in Table 4.\n\nWhile this paper introduces a new problem, where exactly is the novelty in the methodology section?" }, { "confidence": 3, "rating": 6, "review_id": "IvytY2tyfs", "review_text": "This paper tackles the problem of unsupervised cross-domain retrieval. This is the problem where the query and retrieval domains are distinct. For example, in sketch to real retrieval, the system must retrieve the most relevant real images to a query sketch. \"Unsupervised\" refers to the fact that no labels are available during training, but the images from both domains are available. The authors claim to be the first to investigate the \"universal\" version of this problem, where the query and retrieval domains are allowed to have disjoint labels spaces. For this problem, the authors propose a two-stage optimization procedure. In the first stage, three losses are used: (1) an instance-wise contrastive loss (2) a cluster-wise contrastive loss and (3) a semantic enhanced loss. In the second stage, the embeddings between domains are aligned with three losses: (1) an adversarial domain alignment loss (2) a contrastive loss and (3) a nearest neighbor matching loss.\n\n(1) The method is theoretically motivated.\n\n(2) The paper follows a logical orders.\n\n(3) Experiments appear to be complete.\n\n(1) The method is clearly described and seems to be theoretically motivated. However, it is hard to understand intuitively why each loss is necessary. In particular, why we must use six different versions of the contrastive loss across two stages? (IPM, INCE, PNCE, SEL, SPR, SN2M). The theory only seems to justify the IPM loss. \n\n(2) In my opinion, even for someone well versed in metric learning, this method is hard to grasp. Some examples:\n\n - In line 148, the method applies k-means with a variable number of clusters determined by the \"Elbow approach\" and a contrastive loss on top of the cluster centroids. Just this one paragraph requires the person implementing the algorithm to reference another paper and implement a clustering algorithm.\n\n- The argument, starting at line 152, explaining the IPM loss is hard to understand, mostly because of the unusual notation (arrows and xor symbols).\n\n- The argument for the SN2M loss, starting at line 235 is unclear to me. \n\n(3) Overall, the method reads like a series of steps that do not follow one central motivation.\n\n(1) Why do we need two stages of training? Is it really necessary to have two completely different sets of novel loss functions in each stage?" }, { "confidence": 4, "rating": 5, "review_id": "XvBaornhmb", "review_text": "This paper proposes Universal Unsupervised Cross-Domain Retrieval for the first time and designs a two-stage semantic feature learning framework to address it.\n\nThis paper proposes a new approach in universal unsupervised domain adaptation, with sufficient experiments to verify its motivation.\n\n1. In unified unsupervised domain adaptation, there is no handling of instances that are not common categories. Isn't this necessary?\n\n2. From the perspective of innovation, the proposed unified prototype structure is interesting, and the rest is mostly incremental work, such as semantic structure preservation and adjacent feature matching in domain adaptation. From the visualization results, the author failed to prove the above contributions.\n\n3. This paper should reflect the difference between universal domain adaptation and unsupervised domain adaptation.\n\n4. This article does not have a better way to state the method, especially in cross-domain prototype conversion and close neighbor matching.\n\nSee Weaknesses section" } ]
zXfhHJnMB2
Neural Conditional Probability for Uncertainty Quantification
We introduce Neural Conditional Probability (NCP), an operator-theoretic approach to learning conditional distributions with a focus on statistical inference tasks. NCP can be used to build conditional confidence regions and extract key statistics such as conditional quantiles, mean, and covariance. It offers streamlined learning via a single unconditional training phase, allowing efficient inference without the need for retraining even when conditioning changes. By leveraging the approximation capabilities of neural networks, NCP efficiently handles a wide variety of complex probability distributions. We provide theoretical guarantees that ensure both optimization consistency and statistical accuracy. In experiments, we show that NCP with a 2-hidden-layer network matches or outperforms leading methods. This demonstrates that a a minimalistic architecture with a theoretically grounded loss can achieve competitive results, even in the face of more complex architectures.
https://openreview.net/pdf/8fd1f3f77331bd6b69d3849ec940c067888f32dd.pdf
[ { "confidence": 2, "rating": 7, "review_id": "rlKYRZxN6s", "review_text": "This paper proposes Neural Conditional Probability (NCP), a novel operator-theoretic approach for learning conditional probability distributions. Extensive theoretical results are provided to support the optimization consistency and statistical accuracy of NCP. NCP can be used to extract conditional density and compute statistical measures such as conditional mean, variance, moments and CDF once it is trained. Experiments on a collection of conditional density estimation datasets are conducted to highlight the efficacy of NCP.\n\n- This paper is mathematically solid and well-organized.\n- This paper focuses on a fundamental problem of learning conditional distribution in statistical learning and introduces an effective and simplistic approach that outperforms baselines with more complex architectures.\n\n- The proposed NCP method is not clearly motivated or introduced. In Line 49-50, the authors mention that NCP does not belong to any of the four aforementioned approaches. But how is NCP in contrast with them and in what aspects does NCP make improvements? I believe adding some intuitive explanations accompanying theoretical analysis would help improve the readability.\n- Some key concepts or methods are not clearly explained, which makes it hard to understand the contributions of this work. For example, why is learning *conditional expectation operator* considered useful? Are there any baseline methods that also learn expectation operators?\n\nPlease see Weaknesses." }, { "confidence": 1, "rating": 7, "review_id": "2UDhxFJeKK", "review_text": "I am not qualified to review this paper\n\nI am not qualified to review this paper\n\nI am not qualified to review this paper\n\nI am not qualified to review this paper" }, { "confidence": 3, "rating": 7, "review_id": "pygczyp9lB", "review_text": "The authors propose a method (Neural Conditional Probability, NCP) for learning a conditional distribution P(Y | X) from a finite sample from a distribution. The method is based on following observations: (1) it is sufficient to learn the conditional expectation operator E_{Y | X}[f](x) = E[f(Y) | X = x]; (2) the conditional expectation operator can be written as (an infinite) SVD decomposition which could be truncated at some point, so the problem reduced to learning the finite number of functions in the SVD decomposition; (3) the joint distribution density can be written using the functions from the SVD decomposition of the conditional expectation operator, which gives an optimisation objective for fitting the functions from the SVD decomposition using the sample from a joint distribution. The authors provide an extensive theoretical analysis of the proposed method as well as a simulation study on a few synthetic datasets.\n\n+ An interesting, novel and theoretically well-motivated method addressing an important problem of conditional distribution estimation\n+ The method uses a fairly simple neural network (MLP) but achieves the competitive to the methods using much more complex architectures\n+ Thorough theoretical analysis on statistical properties of the proposed estimator\n\n- Limited experiments restricted to synthetic data making it difficult to judge the potential applicability of this method\n- It would be nice to have a short summary on the main properties of operators, their SVD decompositions, etc. I could generally follow the presentation without major problems, but having a such an operators summary would have made it easier to read the paper\n\n- I am wondering about the choice of the specific loss function in Eq. (6). Could, for example, the log-likelihood potentially be used here? If so, what are the advantages of using the Eq. (6) instead of log-likelihood?\n- The loss function in Eq. (9) is roughly speaking a regularisation term enforcing that the singular functions are orthonormal, is it correct? Could it be possible to build the neural nets with such properties by construction rather than enforce it by regularisation?\n- What do you think about the scalability of the model to more complex datasets than in Section 6? For example, conditional image generation. Do you expect issues applying NCP in such cases?" }, { "confidence": 2, "rating": 6, "review_id": "QNfYm1dlwt", "review_text": "The paper proposes Neural Conditional Probability, a novel operator-theoretic approach to learning conditional probability distributions by learning parameters of the truncated SVD of the conditional expectation operator with a neural network. The authors provide a rigorous mathematical derivation and argue for statistical guarantees of their method. The empirical evaluations require major improvements to an otherwise solid paper.\n\n**As a general note:** I do not consider myself an expert on the theoretical aspects of learning theory. My background is in Bayesian deep learning and simulation-based Bayesian inference. As such, my confidence regarding sections 3 and 5 is rather low, and my review shall mainly consult on the remaining sections that focus on presentation, embedding into other literature, and empirical evaluations.\n\n- The introduction is excellent, with a high degree of accessibility for the broader NeurIPS community and sound motivation of the proposed method.\n- The method seems mathematically rigorous, well-motivated, and sound.\n- The authors compare their method against a high number of competing algorithms in the numerical experiments.\n\n## Major\n- The Related Work section does a good job of acknowledging related works that aim to learn conditional distributions. However, it utterly fails to embed the current paper into this research landscape. I recommend the authors elaborate on the precise similarities and differences between the referenced papers and their methods in the rebuttal period.\n- The empirical evaluations are limited to low-dimensional toy problems. This is a stark contrast to the introduction of the method, where the authors repeatedly list the curse of dimensionality as a drawback of other methods. While I acknowledge that the paper is situated in the area of operator learning and ML theory, the quality standard of NeurIPS is not met by the authors’ experiments. This weak evaluation does not do the remainder of the paper justice and I strongly recommend the authors overhaul the experiments to feature high-dimensional tasks that cannot be solved with other state-of-the-art methods. This constitutes a major revision, and this is the main reason why I cannot recommend acceptance to NeurIPS 2024.\n\n\n## Minor\n\n- The empirical evaluation is missing some important information for real-world applications: What are the approximate wall-clock times for (1) training and (2) inference of the competing methods? Further, the authors mention the large required training set size, which might also influence the practically expected training duration in real-world tasks.\n- Please fix the citations throughout the manuscript: Most citations are ‘text citations’ even if their embedding in the sentence warrants parenthesized citations (Author, 1976).\n- This is just a personal preference, no need to address it: The ‘paper organization’ paragraph at the end of the introduction does not add value and the space could be used more efficiently elsewhere in the manuscript.\n- The first sentence in the conclusion is incomplete.\n\n- As per your answer to checklist item 5 (Open access to data and code), I would like to request access to the full and reproducible code of the empirical evaluations.\n- Since you want to compare your method with state-of-the-art conditional density estimation methods: Why don't you benchmark against conditional flow matching?" } ]
zWuHSIALBh
FLAME : Factuality-Aware Alignment for Large Language Models
Alignment is a procedure to fine-tune pre-trained large language models (LLMs) to follow natural language instructions and serve as helpful AI assistants. We have observed, however, that the conventional alignment process fails to enhance the factual accuracy of LLMs, and often leads to the generation of more false facts (i.e., *hallucination*). In this paper, we study how to make the LLM alignment process more factual, by first identifying factors that lead to hallucination in both alignment steps: supervised fine-tuning (SFT) and reinforcement learning (RL). In particular, we find that training the LLM on new or unfamiliar knowledge can encourage hallucination. This makes SFT less factual as it trains on human-labeled data that may be novel to the LLM. Furthermore, reward functions used in standard RL often inadequately capture factuality and favor longer and more detailed responses, which inadvertently promote hallucination. Based on these observations, we propose *FactuaLity-aware AlignMEnt*, comprised of *factuality-aware SFT* and *factuality-aware RL* through direct preference optimization. Experiments show that our proposed *FLAME* guides LLMs to output more factual responses while maintaining their instruction-following capability.
https://openreview.net/pdf/2f86bf598e6a6290e2144203a3b529f671fa6870.pdf
[ { "confidence": 5, "rating": 6, "review_id": "13rUKK2ck8", "review_text": "This work studies how to do alignment for large language models to improve their factuality. The focus of this work is on SFT and DPO. The motivation behind this work is a pilot study which shows more factual data does not always lead to a more factual model. To resolve this issue, the proposed Flame framework (1) handles fact-based and non-fact-based examples differently; (2) uses few-shot generated examples from the model itself for fact-based SFT; (3) builds a reward model specifically for factuality (via atomic fact decomposition, retrieval augmented claim verification, etc.) Experiments on multiple datasets demonstrate that Flame can improve the model's factuality without hurting other capabilities (e.g., instruction following). Ablations are also conducted to measure the gain from each individual step.\n\n1. The motivation is clear and reasonable. I like using a simple and quick pilot experiment to demonstrate the main motivation of this paper.\n\n2. The idea is straightforward and effective. The high level framework can applied to many different systems. \n\n3. Ablation experiments are conducted to show the gain from each step. The effectiveness for both SFT and DPO are clear.\n\n1. No external baselines are used in the comparison. It would be great to compare the flame model with other related approaches (e.g., few-shot prompting, sampling multiple responses, and reranking using FactScore or the reward model). I know these approaches are not directly comparable, however, it will still be valuable to understand the relative trends, especially since approaches such as few-shot prompting are used in data generation.\n\n2. It will be great to conduct human evaluations even just on a few examples.\n\n3. The whole pipeline involves a number of components. While many details are presented in the appendix, low-level details like few-shot prompts, and implementation of fact decomposition are omitted. Adding these details will be super valuable for future work to build similar systems. It would be even better if the authors decide to release the code.\n\n1. In the pilot experiment, doing DPO with FS seems to work reasonably well. Have you tried similar approaches in the real experiments?" }, { "confidence": 4, "rating": 5, "review_id": "6djJMr1xdC", "review_text": "This paper shows that training on new or unfamiliar knowledge can promote hallucination and that reward functions in standard RL often inadequately capture factuality. The authors propose a factuality-aware alignment method that first identifies instructions as fact-based or non-fact-based. For fact-based instructions, they employ adapted techniques in respective SFT and RL to generate additional training data, thereby reducing the hallucination of the model's responses.\n\n* The paper conducts a pilot study that highlights the limitations of SFT and RL in capturing factual knowledge. This study provides valuable insights into data selection for LLM alignment training.\n* The proposed dual-stage factuality-aware method improves factuality without compromising the instruction-following capabilities for both SFT and RL stages.\n\n* The proposed strategy to create SFT and DPO training data using the generated responses from the LLM itself is limited to the knowledge learned within the original model. This approach may struggle with instructions that the original model cannot generate factual answers for.\n* The proposed strategy relies on accurately identifying the instruction type initially, which is limited by the model's ability to correctly classify the instruction type. \n* In the pilot study, it is unclear whether the $PT$ and $PT^{RAG}$ are evaluated using the same protocol as other methods. If they are, the FS score decreases after both SFT and DPO, which contradicts the claim that \"fine-tuning LLMs on their own generations appears to be crucial for factual alignment.\"\n* While the results in Table 2 and 3 indicate that eliciting knowledge from the model itself can enhance factuality compared to introducing more factual but unknown knowledge, it does not improve the FS of the $PT$, which achieves a score of 53.1 on the Biography task with just 5-shot demonstrations.\n* As discussed in Sec 5.5, conducting fact checks and computing factuality rewards solely for fact-based sentences can lead to more factuality errors. Clarification is needed on how FS is calculated for the experiments in Sec 5.2 and 5.3.\n\nPlease refer to the Weaknesses." }, { "confidence": 4, "rating": 4, "review_id": "P5eXLabh8c", "review_text": "This paper addresses the issue of factual inaccuracy, or \"hallucination,\" in Large Language Models (LLMs). The authors identify factors that lead to the generation of false facts during supervised fine-tuning (SFT) and reinforcement learning (RL). They propose FLAME, a novel alignment method that incorporates factuality-aware SFT and direct preference optimization (DPO) to guide LLMs towards more factual responses without compromising their ability to follow instructions. Experiments demonstrate FLAME's effectiveness in enhancing factuality while maintaining helpfulness.\n\n1. The ablation experiments provides comprehensive insights into the effectiveness of DPO and SFT in mitigating hallucination.\n2. The method proposed in this paper attempts to balance instruction following and factuality. It relies on model self-construction data, and does not depend on external proprietary models.\n\n1. The baselines compared in this work are limited to different settings of SFT and DPO only. The baselines in the paper should at least include the work [1]. This prior work also uses DPO and algorithms, and the only difference seems to be data construction. The paper should compare with this work to demonstrate that its algorithm truly achieves a balance between instruction following and factuality.\n2. In addition to the works listed in the related work, there are some works whose methods are somewhat similar to this paper, such as [2] [3], etc. The paper may need to add explanations of the differences between these methods to clarify its own novelty.\n\n[1] Fine-tuning Language Models for Factuality. https://arxiv.org/abs/2311.08401\n\n[2] Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation. https://arxiv.org/abs/2402.09267\n\n[3] GRATH: Gradual Self-Truthifying for Large Language Models. https://arxiv.org/pdf/2401.12292\n\n1. In comparison to training, have the authors considered comparing representation editing baseline methods?\n2. Could the authors supplement experiments on the TruthfulQA-MC in Table 4 to provide a measure of multi-choice performance?" }, { "confidence": 3, "rating": 6, "review_id": "HOaDV5Ipe8", "review_text": "The paper discusses a novel alignment method to enhance the factual accuracy of LLMs. The authors observe that conventional alignment processes, which include SFT and RL, often result in the generation of false facts or 'hallucinations'. To address this, they introduce factuality-aware alignment (FLAME), which includes factuality-aware SFT and RL through direct preference optimization. FLAME identifies factors leading to hallucination and adapts the training process to reduce the generation of false claims. Experiments demonstrate that FLAME guides LLMs to produce more factual responses without compromising their ability to follow instructions. The paper contributes to the field by tackling the issue of maintaining helpfulness while improving the factuality of AI-generated content.\n\n- Clear and Logical Structure: This paper is well-organized and presents its findings with a logical flow, making it easy to follow.\n- In-depth Analysis of Hallucination: The paper thoroughly analyzes the factors contributing to hallucination during the SFT and RL phases of language model alignment. It identifies key issues: training on unfamiliar data can reduce factual accuracy, and standard RL reward functions often prioritize longer, more detailed responses, potentially encouraging the model to fabricate information.\n- Innovative Solution: The proposed FLAME is a novel alignment approach that effectively addresses hallucination without compromising the model's ability to follow instructions. By extending both SFT and RL, FLAME tackles a critical issue in LLMs, ensuring more accurate and reliable information generation.\n- Comprehensive Evaluation: The paper thoroughly evaluates FLAME's effectiveness in improving both factuality and instruction-following abilities. Experiments demonstrate that models aligned using FLAME achieve significantly higher FactScore compared to standard alignment methods, without sacrificing their helpfulness.\n\nThis paper is well-written and makes a valuable contribution to the LLM alignments. I only have several minor concerns as follows:\n- Model Size and Generalizability: The paper focuses solely on the LLaMA2-70B model. It would be beneficial to investigate whether FLAME's effectiveness extends to smaller models, such as 7B or even smaller, given that the factuality-aware SFT relies on self-supervision through few-shot prompting. \n- Evaluation Metrics and Human Assessment: While FactScore is a valuable metric, it has limitations. It assumes Wikipedia as the definitive source of truth and may not be suitable for broader domains. Using a more comprehensive metric like Veriscore [1] could provide a more nuanced evaluation (I understand that Veriscore is a recently released method, so this is a suggestion for the future version of this paper). Additionally, incorporating human evaluation would strengthen the analysis. A manual assessment of factuality and helpfulness would provide valuable insights and increase the persuasiveness of the findings.\n- Multi-faceted Evaluation: The paper primarily focuses on instruction following and factuality. However, other crucial aspects of LLM capabilities, including knowledge, reasoning, and code generation, should also be considered. It would be insightful to evaluate the performance of FLAME-trained models on standard benchmarks like MMLU, GSM8K, and HumanEval to assess potential trade-offs in these areas.\n\n- While FLAME primarily focuses on DPO, can it also be applied to conventional reinforcement learning from human feedback (RLHF) methods like PPO?\n- Are there plans to release the code and models trained using FLAME for the research community to replicate your methods?" } ]
zWnW4zqkuM
InstructG2I: Synthesizing Images from Multimodal Attributed Graphs
In this paper, we approach an overlooked yet critical task Graph2Image: generating images from multimodal attributed graphs (MMAGs). This task poses significant challenges due to the explosion in graph size, dependencies among graph entities, and the need for controllability in graph conditions. To address these challenges, we propose a graph context-conditioned diffusion model called InstructG2I. InstructG2I first exploits the graph structure and multimodal information to conduct informative neighbor sampling by combining personalized page rank and re-ranking based on vision-language features. Then, a graph QFormer encoder adaptively encodes the graph nodes into an auxiliary set of graph prompts to guide the denoising process of diffusion. Finally, we propose graph classifier-free guidance, enabling controllable generation by varying the strength of graph guidance and multiple connected edges to a node. Extensive experiments conducted on three datasets from different domains demonstrate the effectiveness and controllability of our approach. The code is available at https://github.com/PeterGriffinJin/InstructG2I.
https://openreview.net/pdf/14232e04d8524d77648d3c5ea135527ad4aef01a.pdf
[ { "confidence": 3, "rating": 5, "review_id": "znQXTCns5r", "review_text": "The authors propose an approach to enhance image synthesis using multimodal attributed graphs, adopting a strategy to condition image generation via a tokenization scheme on graph structure.\n\n- The paper studies an intersectional topic: leveraging graph learning techniques for image generation, which is a creative application and an area which deserves more focus.\n- The authors' use of qualitative examples (e.g. Figure 5 and 6) is commendable and helps articulate visual improvements.\n\nPlease see questions and concerns below. My general feeling is the paper is fairly incremental in its introduction of a mechanism to encode graph condition into the conditioning for generation. Many design choices for graph conditioning are not discussed well and the quantitative results for some of these choices are missing which hurts the overall impact of the work.\n\n- Typos: \n - Line 18: \"graph-structued\"\n\n- The motivation proposed in lines 28-30 is a little bit confusing, since the scenario the authors discuss here (e.g. virtual artwork creation based on nuanced styles of artists and genres) seems like it could be well-handled by text rather than explicitly using graph structure. \n\n- There is limited prior work in multimodal graph learning, as the authors mention. The authors may want to reference and position their work with respect to the recent [1] which offers multiple datasets and focuses on utility of GNN methods for node/link/graph-level tasks rather than generative tasks.\n\n- Nit: the notation is a bit awkward compared to conventional graph literature which typically uses $\\mathcal{V}, \\mathcal{E}, \\mathcal{X}, \\mathcal{F}$ or something similar to indicate node-set, edge-set, node-features, and edge-features. The authors proposed notation in line 72 seems to define P and D as different sets of images / documents compared to the nodes V, but then mentions that each node has some textual information and image information corresponding to P and D (it should be made clear whether this information is just features, or actual node relationships -- if the latter, it seems that P and D should be contained within the nodeset V). \n\n- The process described in line 115 around introducing tokens to help condition the representation using graph structure is also explored in some related works, e.g. [2]. Perhaps the authors could consider adopting a similar approach if it makes sense in this task, since the tokenization scheme as the authors of [2] point out is key in injecting the right level of information to the model.\n\n- Comment: the notation in pages 3-5 is quite heavy and would benefit from a symbol table.\n\n- Section 3.2 proposes a heuristic solution for neighbor selection. I'd encourage exploring solutions designed for learnable importance of multiple edge types similar to [3].\n\n- Can the authors discuss what sort of techniques were used to incorporate graph conditions for the baseline models like InstructPix2Pix and SD?\n\n- Is there a quantitative understanding or experiment for the PPR neighbor based sampling approach? It seems this is one of the more heuristic parts of the paper where the design of the sampling procedure (two phase PPR + semantic re-ranking) is less conventional and deviates from other aggregation mechanisms explored in previous literature like attention-based selection, learnable importance of multiple edge types, etc. The qualitative experiment is helpful but not terribly convincing in terms of the actual performance impact in aggregate.\n\n[1] Multimodal Graph Benchmark (Zhu et al, 2024)\n\n[2] LLaGA: Large Language and Graph Assistant (Chen et al, ICML 2024)\n\n[3] Pathfinder Discovery Networks for Neural Message Passing (Rozemberczki et al, WWW 2021)" }, { "confidence": 3, "rating": 6, "review_id": "h1c134H6gx", "review_text": "This paper focuses on the problem of image synthesis on multimodal attributed graphs (MMAGs) and proposes a graph context-conditioned diffusion model, INSTRUCTG2I, to address the challenge in this setting. In particular, it proposes a semantic personalized PageRank-based method to sample related neighbors in the graph. Then, the INSTRUCTG2I can effectively encode graph conditional information as graph prompts with Graph-QFormer. Systematic experiments on MMAGs demonstrate the effectiveness of the methods proposed in this paper compared to competitive baseline methods.\n\n1. This paper studies an interesting and meaningful question. It investigates the graph-structured relationships of real-world entities for image generation on MMAGs, a task well-grounded in practical applications. \n2. This paper is well-structured and easy to understand.\n3. The graph context-conditioned diffusion model proposed in this paper is reasonable in solving image generation problems on MMAGs.\n\n1. The description in eq.10 may be incorrect. Please check more carefully.\n2. Subsection 3.4 is more challenging to understand when reading. The authors' descriptions of some symbols in Eq. 10 and Eq. 11 are not exhaustive.\n3. The results of the ablation experiments in Table 2 indicate that using a GNN such as GAT or GraphSAGE to aggregate graph information seems to be worse than the straightforward approach in Eq.7. Authors are requested to give a more detailed discussion with a reasonable explanation.\n4. The images sampled by the semantic PPR-based sampling shown in Figure 5 appear to have the same image as the ground truth. Does this indicate that the proposed method suffers from label leakage?\n\n1. Please see the weaknesses.\n2. I wonder if the authors will compare it to other state-of-the-art image generation models, such as some Multimodal LLMs that are so prevalent nowadays." }, { "confidence": 3, "rating": 7, "review_id": "ca65UCeO7O", "review_text": "The paper introduces a new task graph2image which is to generate images conditioned on both text descriptions and graph information, which improves consistency of generated images compared to conditioned only on texts or images. To address combinatorial complexity of graphs and dependencies among graph entities, the paper proposes a graph context-conditioned diffusion model InstructG2I for generating images from multimodal attributed graph.\n\n- To the best of my knowledge, graph2image is a novel task, and the motivation to use the rich and high-dimensional information of graphs for image generation seems reasonable and interesting. \n- The proposed approach to incorporate graph information into pre-trained text-to-image is new, in particular introducing graph conditioning token and considering scalability of graph size. \n- The generated samples show that using graph information results in better consistency with the ground truth compared to methods that use only text prompts or images.\n- Examples of controllable generation with both text and graph show the ability to balance content and style in a simple manner.\n\nWhile I do not have a major concern, an ablation study on scalability to graph size seems to be missing. How large graphs is the method able to be applied?\n\n- Why is the DINOv2 score on Goodreads dataset significantly low compared to that of ART500K or Amazon datsets?" }, { "confidence": 3, "rating": 5, "review_id": "FlRUyG2fRS", "review_text": "This paper introduces a novel approach for controllable image generation using both graph and text conditions. The authors propose that additional context information from multimodal attributed graphs (MMAGs) can enhance the performance of diffusion models. Specifically, they formulate the Graph2Image problem and develop the INSTRUCTG2I model to incorporate contextual information during the generation process. Empirical evaluations demonstrate the strong performance of the model.\n\n1. The paper is easy to follow.\n2. The intuition behind the approach is clear.\n\n1. The overall setting is questionable. The authors integrate graph information using a Graph-QFormer and context information such as artists and genres, stored in graph prompt tokens. Given the large graph size, they only use subgraph structures. Consequently, the Stable Diffusion (SD) model absorbs additional information from similar artworks, which could be derived from image or text prompts alone. This raises the question of whether an additional condition structure is necessary. I suggest the authors demonstrate a unique application where standard models with text and image prompting capabilities are insufficient.\n\n1. Are there any unique scenarios where only graph input can significantly improve SD performance?\n\nAs my review is overdue, I welcome concise feedback and am open to clarifying any potential misunderstandings." } ]
zVrQeoPIoQ
Rethinking No-reference Image Exposure Assessment from Holism to Pixel: Models, Datasets and Benchmarks
The past decade has witnessed an increasing demand for enhancing image quality through exposure, and as a crucial prerequisite in this endeavor, Image Exposure Assessment (IEA) is now being accorded serious attention. However, IEA encounters two persistent challenges that remain unresolved over the long term: the accuracy and generalizability of No-reference IEA are inadequate for practical applications; the scope of IEA is confined to qualitative and quantitative analysis of the entire image or subimage, such as providing only a score to evaluate the exposure level, thereby lacking intuitive and precise fine-grained evaluation for complex exposure conditions. The objective of this paper is to address the persistent bottleneck challenges from three perspectives: model, dataset, and benchmark. 1) Model-level: we propose a Pixel-level IEA Network (P-IEANet) that utilizes Haar discrete wavelet transform (DWT) to analyze, decompose, and assess exposure from both lightness and structural perspectives, capable of generating pixel-level assessment results under no-reference scenarios. 2) Dataset-level: we elaborately build an exposure-oriented dataset, IEA40K, containing 40K images, covering 17 typical lighting scenarios, 27 devices, and 50+ scenes, with each image densely annotated by more than 10 experts with pixel-level labels. 3) Benchmark-level: we develop a comprehensive benchmark of 19 methods based on IEA40K. Our P-IEANet not only achieves state-of-the-art (SOTA) performance on all metrics but also seamlessly integrates with existing exposure correction and lighting enhancement methods. To our knowledge, this is the first work that explicitly emphasizes assessing complex image exposure problems at a pixel level, providing a significant boost to the IEA and exposure-related community. The code and dataset are available in \href{https://github.com/mRobotit/Pixel-level-No-reference-Image-Exposure-Assessment}{\textcolor{red} {here}}.
https://openreview.net/pdf/9d159e1b2a972b461ac4b69cc1ad301734642641.pdf
[ { "confidence": 5, "rating": 5, "review_id": "BtL8B1Nrw3", "review_text": "The paper introduces a novel paradigm that extends Image Exposure Assessment (IEA) from an image-level to a pixel-level framework. This paradigm comprises three components: model, dataset, and benchmark. Concerning the model, the study introduces the Pixel-level IEA Network (P-IEANet). This network processes images of varying exposures, separates them into low and high-frequency components via a discrete wavelet transform, assesses brightness with the low-frequency component, and evaluates structure with the high-frequency component, ultimately delivering pixel-level assessment results. Regarding the dataset, the authors have developed a new dataset, IEA40K, which includes 40,000 images featuring diverse exposures and corresponding pixel-level annotations. Finally, the paper presents comprehensive experiments on both holistic and pixel-level assessments, yielding promising results.\n\n1. The paper initially proposes a pixel-level image exposure assessment paradigm, significantly enhancing precision in the field of image exposure assessment.\n\n2. The paper introduces an assessment network that employs discrete wavelet transform, an intriguing choice supported by several ablation studies.\n\n3. The paper proposes a large-scale, multi-exposure dataset with pixel-wise annotations derived from an automatic multi-exposure fusion technique, subsequently refined by human experts.\n\n4. The paper also demonstrates that the P-IEANet can potentially improve the performance of low-light image enhancement methods.\n\n5. The paper is well-composed, demonstrating a clear structure, precise language, and a logical flow of ideas.\n\nThe main weakness is that the paper lacks a well-defined definition for pixel-level image exposure assessment. For other details, please refer to the \"Questions\" part.\n\nI find the proposed task interesting, while I have reservations about certain assertions made in the paper, terms that lack clarity, and the absence of adequate justification for the introduction of certain tasks without clear motivation. For further details, see the below list. I've organized my concerns and suggestions according to their significance to assist the authors in prioritizing their rebuttal.\n\n1. The terminology employed in the paper suffers from a lack of clarity, necessitating more detailed explanations. For instance, the term ``exposure`` conventionally refers to the duration of exposure time in the context of capturing images with digital cameras, typically considered as a global attribute of an image. However, the paper introduces the concept of ``pixel-level exposure`` without providing a sufficient explanation, which is illogical in the literal sense. Similarly, the term ``exposure residual`` is introduced but remains poorly defined, further complicating the understanding of the methodology. Probably, the paper misuses the concepts of ``exposure`` and ``brightness``, which cannot be used interchangeably, however.\n\n2. The motivation behind the paper remains ambiguous. It argues that a holistic evaluation of image exposure encounters two primary issues: (1) a dilemma between applicability and practicability, and (2) a narrow inductive bias. However, the paper lacks a further explanation of these problems. Incorporating visual results from current holistic evaluation methods that exhibit these issues could more effectively and intuitively demonstrate the paper's motivation. In the current version, the necessity for a pixel-level image exposure assessment method is not clearly articulated, particularly under which circumstances such a technique would be essential.\n\n3. Related to the first point, the proposed method targets to predict ``exposure residual``, defined as ``(reference - input)`` in RGB space. However, the rationale behind this definition requires further justification. Specifically, it remains unclear why this definition is suitable for use as the ground truth in pixel-wise image exposure assessment. Additionally, it is essential to explore whether any disparity exists between the concepts of ``(reference - input)`` in RGB space and the actual pixel-wise score for image exposure.\n\n4. For evaluation metrics, PSNR and SSIM are two commonly used pixel-wise metrics. However, this paper only adopts SSIM for evaluation. Including PSNR performances would provide a more convincing argument.\n\n5. While comparing pixel-level performance with other image enhancement methods, the paper derives these methods' ``exposure residual`` predictions by directly predicting the residual map. However, image enhancement techniques typically use loss functions designed to smooth the final outputs and align them with human perception, which may not be appropriate for predicting residuals. Although it is acknowledged that the difference between ``(output-input)`` and the proposed ``exposure residual`` exists, incorporating an additional ablation study that calculates the residual from ``(output-input)`` would likely provide a more comprehensive analysis.\n\n6. Important details, such as the architecture of the proposed Long Range Encoder (LRE) and Short Range Encoder (SRE), are missing, hindering the reproductivity of the proposed framework.\n\n7. The availability of the proposed dataset to the public is crucial for assessing the contribution of this work." }, { "confidence": 5, "rating": 7, "review_id": "lv268bJBr4", "review_text": "This work tackles the challenges in image exposure assessment from three aspects: models, datasets, and benchmarks. Specifically, A P-IEANet model based on DWT is proposed, which can generate pixel-level assessment results in a no-reference manner. An exposure-oriented dataset IEA40K is collected to cover various lighting scenarios, devices, and scenes, which are annotated by more than 10 experts with pixel-level labels. A comprehensive benchmark of 19 methods is conducted on the collected IEA40K dataset, where the proposed P-IEANet delivers the best performance.\n\n+ Decomposing images into lightness features and structure components using Haar DWT is theoretically reasonable and empirically effective as presented in this work.\n+ The dataset construction strategies described in Sec. 4.1 and Sec. 4.2 provide valuable insights to the related community.\n+ The proposed model delivers good performance, even outperforming the LMM-based model Q-align.\n\n- Holistic level assessment is performed on SPAQ. It should be straightforward to convert the pixel-level annotations to holistic level annotations in the proposed IEA40K dataset because the pixel-level annotations contain more information than the holistic level annotations.\n- Would the performance of IEA models be boosted by jointly training (like the practices used in UNIQUE, LIQE, etc.) the model on the combination of IEA dataset and general-purpose IQA datasets?\n\nInstead of SSIM and PSNR, I think Eq. (7) can also be employed as a pixel-level performance measure." }, { "confidence": 4, "rating": 5, "review_id": "1LJbys9qQZ", "review_text": "This paper proposes a new no-reference image exposure assessment method, Pixel-level IEA Network (P-IEANet), which analyzes and evaluates image exposure from the perspectives of brightness and structure using discrete wavelet transform (Haar DWT). Also, a dataset exclusively tailored for IEA, called IEA40K, is constructed. According to a comprehensive evaluation of methods on the IEA40K dataset, the proposed method achieves SOTA performance and offers advantages for the exposure enhancement community.\n\nThis paper demonstrates very good originality as it is the first realization of pixel-level image exposure assessment. The authors have designed corresponding methods specifically addressing the characteristics of this problem and achieved satisfying results. Detailed explanations of the motivation and the current state of research are provided. Both the principles and the implementation of the method are clearly presented. The experimental results effectively demonstrate the performance of the proposed method. This paper not only proposes a new IEA method but also contributes a new dataset and benchmark, providing a significant boost to the IEA and exposure-related community.\n\nHaar DWT is used to decompose an image into components with different frequencies, but the advantages of this method compared to other similar methods are not adequately explained. In the method section of this paper, some operations lack clear motivation or principles. For example, the reason for applying the DWT^{-1} and the choice of l1 norm as the loss function are not well explained. In the experiments section, SSIM and MAE are adopted to measure the structure and lightness similarity between the ground truth and predicted exposure residual. However, as a perceptual IQA metric, SSIM may not be suitable for evaluating the prediction accuracy of exposure residuals. The paper claims that the proposed method has improved adaptability across varying criteria and scenarios, but this is not well demonstrated in the experiments.\n\n1. Why use Haar DWT to decompose an image into components with different frequencies, and what are its advantages compared to other similar methods, such as other types of DWT? \n2. Why is the DWT^{-1} step necessary?\n3. What is the reason for using the l1 norm as the loss function?\n4. Why choose MAE to measure the structure and lightness similarity between the ground truth and predicted exposure residual, instead of using MSE?" }, { "confidence": 4, "rating": 7, "review_id": "UDa2TKnB8T", "review_text": "This paper proposes an innovative no-reference image exposure assessment method, transitioning from traditional holistic image evaluation to fine-grained pixel-level assessment. This approach effectively addresses the shortcomings of existing techniques in terms of accuracy and generalization. Researchers have developed P-IEANet, a pixel-level evaluation network that utilizes Haar discrete wavelet transform to analyze image brightness and structural information, enabling exposure assessment without reference images. Additionally, to support this method, the researchers have constructed the IEA40K dataset, which contains 40,000 images with detailed pixel-level annotations, covering diverse lighting conditions and devices. Using this dataset, they established a comprehensive benchmark including 19 methods, demonstrating that P-IEANet achieves state-of-the-art performance across multiple evaluation metrics. This work not only enhances the accuracy of no-reference IEA tasks but also provides valuable resources and new research directions for the image exposure research community. Future work will focus on optimizing the framework to support multimodal outputs and enhancing exposure perception in AI-generated content.\n\n- Pixel-level Evaluation: The P-IEANet proposed in the article is capable of conducting pixel-level image exposure assessment, which offers a more refined analysis and more accurate results compared to traditional overall image assessment.\n- Innovative Model Architecture: By integrating the Haar Discrete Wavelet Transform with specific feature extraction modules, P-IEANet is able to analyze images from both the brightness and structural perspectives, providing a more comprehensive exposure assessment.\n- Large-scale Dataset: The article has constructed the IEA40K dataset, which is a large-scale, diverse image dataset that provides rich resources for evaluation and training.\n\n- The author mentions in the abstract that the code and dataset can be found in the supplementary materials, but there is no relevant section in the supplementary materials.\n- There is no explanation as to why the Haar wavelet was chosen over other wavelets.\n- The aesthetic quality of Figure 4 needs to be improved.\n\nPlease refer to the comments in the weakness part." } ]
zV2GDsZb5a
Neural Gaffer: Relighting Any Object via Diffusion
Single-image relighting is a challenging task that involves reasoning about the complex interplay between geometry, materials, and lighting. Many prior methods either support only specific categories of images, such as portraits, or require special capture conditions, like using a flashlight. Alternatively, some methods explicitly decompose a scene into intrinsic components, such as normals and BRDFs, which can be inaccurate or under-expressive. In this work, we propose a novel end-to-end 2D relighting diffusion model, called Neural Gaffer, that takes a single image of any object and can synthesize an accurate, high-quality relit image under any novel environmental lighting condition, simply by conditioning an image generator on a target environment map, without an explicit scene decomposition. Our method builds on a pre-trained diffusion model, and fine-tunes it on a synthetic relighting dataset, revealing and harnessing the inherent understanding of lighting present in the diffusion model. We evaluate our model on both synthetic and in-the-wild Internet imagery and demonstrate its advantages in terms of generalization and accuracy. Moreover, by combining with other generative methods, our model enables many downstream 2D tasks, such as text-based relighting and object insertion. Our model can also operate as a strong relighting prior for 3D tasks, such as relighting a radiance field.
https://openreview.net/pdf/71a526cee2abe808ec8027770fd2ee1ce6e3a7fc.pdf
[ { "confidence": 4, "rating": 4, "review_id": "CpBjI95Arn", "review_text": "This paper presents a method for relighting objects observed from a single image. While existing approaches rely on specific capture condition using flashlight illumination or portrait captures, or require to explicitly decompose the scene into geometry and reflectance, the proposed method aims to generate images of a given objects under novel illumination conditions for arbitrary environmental lighting conditions. The authors show that this is possible by relying on a generative diffusion method that is conditioned on the environmental map. The method relies on a pre-trained diffusion model that is fine-tuned on a synthetic relighting dataset to learn the conditioning. The approach is evaluated qualitatively and quantitatively on single-object images. Relying on a conditional diffusion model for relighting, the authors also show additional conditioning on text for relighting.\n\nThis work presents a simple (this is a good thing) and effective method for relighting from a single image. The method relies on synthetic supervision with a novel Blender-rendered dataset that uses Objeverse as input model source. The authors went a long way by collecting diverse HDR environment maps from the Internet that were augmented to produce a large synthetic relighting dataset of almost 20M rendered images with ground truth lighting maps. Overall, the method offers a number of intriguing benefits listed as follows:\n\n* Conditional image-to-image diffusion model: The method inherits a conditional Zero-1-to-3 model that is extended in its input latents to a rotated environment map with the camera coordinate frame, allowing for image-to-image relighting in a consistent frame. While, given enough training data, the method is effective in relighting, the approach also enjoys the benefits of existing diffusion architectures with various types of conditioning. The authors demonstrate this effectively with their image conditioning. \n\n* Relighting 3D radiance fields: The proposed method is evaluated as a prior for 3D relighting of a neural radiance field. Specifically, the authors propose to use diffusion-based relighting as a coarse reconstruction loss (predicting a coarse relit scene during the NeRF optimization) and a detail refinement loss where the NeRF appearance is further refined.\n\n* Qualitative evaluation: The evaluations presented qualitatively in the main manuscript and the supplemental material in the form of supplemental videos are visually plausible and convincing. \n\n* Quantitative evaluations: The method is adequately ablated and quantitatively compared to single image relighting methods, 3D radiance field relighting with reasonable margins on the test sets. This validates the method as an effective approach.\n\nWhat makes the method exciting, at first glance, is also one of the major weaknesses: the technical novelty. The paper piggy-backs on an existing generative method, the Zero-1-to-3 model, that is with a few variations used for relighting. While the simplicity is something that is desired, it also makes it challenging for the reader to derive deeper insights from this work. We learn that pre-trained diffusion-models, when just given enough and the right synthetic data, can allow for plausible novel view synthesis with artefacts that are improved over existing methods. However, the recent work by \n\nChong Zeng, Yue Dong, Pieter Peers, Youkang Kong, Hongzhi Wu, and Xin Tong. Dilightnet: Fine-grained lighting control for diffusion-based image generation, 2024.\n\nin a way also does show the exactly same, although the technical approach is different. Overall, the technical contribution of the approach is rather incremental (although the method is effective). As such, I am torn on this work. While the technical contribution is not near other work at NeurIPS, the method is effective and likely of high impact. \n\nA further qualm I have is regarding the results compared to NVDIFFREC. While the margins are not substantially different, the results in Fig. 6 seem to indicate differently. It seems as if these results are cherry-picked.\n\nSee questions regarding trends in the quantitative evaluations and the qualitative results that do not seem to match." }, { "confidence": 3, "rating": 7, "review_id": "TQYERDblIs", "review_text": "The paper introduces Neural Gaffer, an end-to-end 2D relighting diffusion model designed for single-image relighting without the need for explicit scene decomposition. Neural Gaffer can synthesize high-quality relit images of any object under novel environmental lighting conditions by conditioning on a target environment map. The model builds on a pre-trained diffusion model, fine-tuning it on a synthetic relighting dataset. The advantages in generalization and accuracy through evaluations on both synthetic and in-the-wild Internet imagery are shown in the paper. Neural Gaffer can be combined with other generative methods for various downstream 2D tasks like objection insertion. The video results presented in the paper are of high quality.\n\n1) Neural Gaffer performs single-image relighting without the need for explicit scene decomposition into intrinsic components like normals and BRDFs. This provides an avenue for relighting without collecting expensive relighting real-world datasets.\n\n2) The model can generate relit images of various objects under different environmental lighting conditions based on a target environment map. The method takes a single image as an input.\n\n3) The method can be applied to real-world objects with high-quality relighting results and perform various downstream tasks such as object insertion.\n\n1) In case of the real-world object scenarios, the object may not be always centred and may have complex backgrounds and lighting to start with. The paper does not demonstrate how would the method behave in such cases. How about the objects with high-frequency texture details?\n\n2) Related to 1) there might be multiple objects in a scene. From the results, it seems that the method cannot handle multiple objects from a single image.\n\n3) The real-world object examples shown in the paper and the video are good but not impressive. It would be more compelling to show faces, humans, animals etc under the lighting conditions to show the generalizability of the method.\n\nThe paper does not demonstrate how the method behaves when the target object is not centred and has complex backgrounds or varied lighting conditions. How does the method perform in such scenarios, especially with objects that have high-frequency texture details?\n\nIt appears that the method may struggle with scenes containing multiple objects. Can the authors provide further evaluation or examples to show how the method handles multiple objects in a single image?\n\nWhile the real-world object examples are good, they are not particularly impressive. Can the authors provide more compelling examples involving faces, humans, or animals under varied lighting conditions to better demonstrate the generalizability of the method? While it's understood that portrait lighting might not be comparable to those methods specifically trained on portraits, it would be good to see the generalizability of the method." }, { "confidence": 4, "rating": 5, "review_id": "kos6Enh35V", "review_text": "Neural Gaffer presents an approach to object-centric image relighting using diffusion models. The method adapts a pre-trained diffusion model and fine-tunes it on a synthetic dataset designed for relighting tasks. The main feature is its ability to condition the diffusion process on target environment maps, allowing for control over lighting effects.\n\n1) Simple yet effective approach: The paper presents a straightforward fine-tuning method for object relighting, similar to zero-1-2-3 shot learning. This simplicity is a strength, demonstrating that complex relighting can be achieved without overly complicated techniques.\n\n2) Powerful data-driven learning: The supervised conditional diffusion model effectively learns to relight objects, highlighting the potential of data-driven approaches in capturing intricate lighting interactions.\n\n3) Competitive results: Based on the presented figures, the method appears to outperform the recent DiLightNet in some aspects. However, this comparison raises some evaluation questions (see questions section for details).\n\n1) Real-world evaluation: The model is fine-tuned on a synthetic relighting dataset, which might not fully capture the complexity of real-world lighting scenarios. Real-world evaluation is necessary, and there are datasets capturing these effects. The paper is currently missing this evaluation, and there are datasets available for such evaluation [1] OpenIllumination [2] Objects with Lighting or [3] Stanford ORB dataset. These papers have been cited but it is surprising to not see an evaluation of these datasets.\n\n2) Reliance of Environment map: Do you need to supply the environment map for relighting? There is a missing baseline that shows what happens if you condition the target lighting image without a full environment map (only image crops). The Diffusion Light Probe (CVPR 2024) paper indicates that diffusion models are capable of inpainting reliable environment maps and they seem to be implicitly encoded within the model. This baseline will justify why a full environment map is required or necessary for this task.\n\n3) Generalization to scenes: The extent to which the method generalizes to scenes -- not just objects -- is unclear. Evaluating the MIT-multi illumination dataset could shed light on this. The current reliance on explicit environment maps makes it harder to perform on these scenes, but it would be interesting to see if, without explicit environment maps (like suggested above), can you learn to relight and compare on scenes.\n\n4) Evaluation metrics: Recent studies show that PSNR, SSIM, etc. are not consistent with human evaluation. See \"Towards a Perceptual Evaluation Framework for Lighting Estimation\" (CVPR 2024). These metrics don't tell us much about whether the method is promising as such. A thorough evaluation via user studies or the metrics as defined in the recent paper is currently missing from the paper.\n\n5) Unrealistic results and missing comparisons: The object insertion results look unrealistic, with incorrect shadows that don't match the lighting conditions. Several relevant lighting-aware compositing methods are missing from the comparisons, such as ControlCom [Zhang et al., arXiv 2023], Intrinsic Harmonization [Carega et al., SIGGRAPH 2023], Reshading [Bhattad and Forsyth, 3DV 2022], and ARShadowGAN [CVPR 2020]. The comparison to AnyDoor doesn't make sense as it's not lighting-aware. Including these comparisons would provide a better evaluation of the method's performance against current state-of-the-art techniques.\n\n6) Further, as the papers use off-the-shelf methods to estimate environmental maps (text2light), why not compare with existing 3D object compositing with lighting estimation methods to get a sense of how the proposed methods compare to these tasks -- see Garon et al (CVPR 2019), StyleLight (Wang et al; ECCV 2022) and similar papers? Rendering objaverse objects using lighting estimated from the mentioned or similar methods would help understand the gaps between explicit environment map prediction methods.\n\n7) 3D relighting evaluation: For the 3D relighting setting, according to the Objects with Lighting 3DV 2024 paper, Mitsuba + NeuS is a stronger baseline compared to TensorIR, which is currently missing in the paper.\n\n8) Failure analysis: The paper mentions in the limitations section that the approach might not work for portrait relighting, but it would be interesting to see the kind of failures the diffusion model makes. The current setup lacks experiments in this direction to see what are these failures to encourage future research. Further, the current papers also do not provide any failure examples from Objaverse instances. Is the method perfect on all unseen objects -- detailed analysis is missing as to what objects the proposed methods perform best or worse on. Such analysis helps scope out limitations of the current methods instead of shallow limitations provided in Appendix D.\n\n9) Lack of comparison with simple color matching baselines: The paper doesn't include a comparison with straightforward color adjustment techniques, such as RGB histogram matching between the inserted object and the target scene. This omission raises questions about how much of the method's perceived success in relighting is due to sophisticated light interaction modeling versus simple color transformations. A comparison with such a baseline would help quantify the added value of the diffusion model approach over a simpler method.\n\n1) Why weren't datasets like OpenIllumination, Objects with Lighting, or Stanford ORB used for evaluation?\n\n2) Have you explored the necessity of full environment maps for relighting?\n\n3) How well does your method generalize to full scenes, beyond individual objects?\n\n4) Given recent findings on the inconsistency of PSNR and SSIM with human perception for lighting tasks, have you considered user study?\n\n5) Why were comparisons with recent lighting-aware object compositing methods (e.g., ControlCom, Intrinsic Harmonization) not included?\n\n6) Have you considered comparing your method with existing 3D object compositing and lighting estimation approaches?\n\n7) Why wasn't Mitsuba + NeuS used as a baseline for 3D relighting, given its reported strength in recent literature?\n\n8) Can you provide a more detailed analysis of failure cases, including examples from Objaverse instances?\n\n9) Can you provide a simple color histogram matching baseline? \n\n10) Comparison with DiLightNet: DiLightNet offers full image generation with background handling, while Neural Gaffer focuses on object relighting. This raises several points: \n\n- Background consistency: How does Neural Gaffer address the background when relighting objects?\n- Evaluation scope: Are quantitative evaluations done on the full scene or just the object region? This impacts the interpretation of results.\n- Lighting control: DiLightNet allows full-scene control. How comprehensive is Neural Gaffer's approach in comparison?\n- User input method: DiLightNet uses radiance hints, Neural Gaffer uses environment maps. How do these compare in terms of user-friendliness and control/precision?\n- Shadow and Indirect lighting effects quality: DiLightNet's shadows and indirect effects appear more convincing from their project page. Can you comment on this difference? Can you provide a user study comparing the perceived lighting quality between Neural Gaffer and DiLightNet? \n\n11) How sensitive is your method to the resolution of input environment maps?" }, { "confidence": 4, "rating": 6, "review_id": "c4su0GzybC", "review_text": "The paper proposes a novel method for single-image relighting, which takes an image of an object and a target environmental map as inputs. The authors fine-tune Stable Diffusion on a synthetic relighting dataset to output relit images, conditioning on both the input object image and the target environmental map. The authors show their method outperforms existing baselines. Additionally, the trained relighting model can be applied to downstream tasks such as relighting a neural radiance field and object insertion.\n\n- I check the video results in the supplementary video. The visual results are impressive.\n- The authors have shown several downstream applications using their trained relighting model, including text-based relighting and object insertion.\n- The authors have conducted extensive ablation studies to prove the effectiveness of their proposed method.\n\nI don’t have many complaints about the paper. I list several potential improvements below:\n\n- In the 3D relighting experiments, it seems unfair to compare with inverse rendering methods such as Nvdiffrec-mc and TensoIR, as they can apply any lighting to the object once the material is recovered, while Neural Gaffer needs optimize for every lighting. On the other hand, I think Neural Gaffer should be combined with these inverse rendering methods and provide priors when recovering material and lighting.\n- The extrinsic information is injected by rotating the environmental map. However, it seems intrinsic information is not considered, which means there is an assumed fixed FOV. This could introduce biases in downstream applications and limit the input views in 3D relighting.\n- The quantitive comparison with IC-Light is missing.\n- The generated image resolution is limited to 256x256.\n\nThe problem of inverse rendering with a single image is inherently ambiguous. For example, the object color in the input image could come from either the material or the lighting. I was wondering about the authors' thoughts on this problem in the context of Neural Gaffer. When relighting an object, there could be multiple possible outputs depending on the decomposition of the material and lighting in the input. Is the probabilistic model of Neural Gaffer able to model this perplexity?" } ]
zTu0QEpvtZ
Towards Understanding the Working Mechanism of Text-to-Image Diffusion Model
Recently, the strong latent Diffusion Probabilistic Model (DPM) has been applied to high-quality Text-to-Image (T2I) generation (e.g., Stable Diffusion), by injecting the encoded target text prompt into the gradually denoised diffusion image generator. Despite the success of DPM in practice, the mechanism behind it remains to be explored. To fill this blank, we begin by examining the intermediate statuses during the gradual denoising generation process in DPM. The empirical observations indicate, the shape of image is reconstructed after the first few denoising steps, and then the image is filled with details (e.g., texture). The phenomenon is because the low-frequency signal (shape relevant) of the noisy image is not corrupted until the final stage in the forward process (initial stage of generation) of adding noise in DPM. Inspired by the observations, we proceed to explore the influence of each token in the text prompt during the two stages. After a series of experiments of T2I generations conditioned on a set of text prompts. We conclude that in the earlier generation stage, the image is mostly decided by the special token [\texttt{EOS}] in the text prompt, and the information in the text prompt is already conveyed in this stage. After that, the diffusion model completes the details of generated images by information from themselves. Finally, we propose to apply this observation to accelerate the process of T2I generation by properly removing text guidance, which finally accelerates the sampling up to 25\%+.
https://openreview.net/pdf/e5a980c943643aa24ee25b4d6f1f338b4cf65964.pdf
[ { "confidence": 4, "rating": 6, "review_id": "tcuXG63FXo", "review_text": "This paper aims to understand two mechanisms of diffusion models. First, the denoising process is analyzed, and it is found that shapes in an image are constructed in the beginning of the denoising process, while textures and details are filled in later. This empirical observation is justified with a mathematical frequency analysis. Second, the role of text conditioning is analyzed and it is found that the [EOS] token, which captures global information of the prompt, is relied on more heavily by the diffusion model. It is also observed that the text prompt is utilized more in the earlier stages of the denoising process. This finding is utilized to speed up diffusion sampling by ~25% while maintaining the image quality and prompt alignment. This is done by only injecting conditional information in the beginning of the denoising process.\n\n* Although the finding that shape is constructed in the first few timesteps has been observed many times before, it is nice to have a more principled study with various experiments and mathematical justification. \n* The finding that the special [EOS] token is the most relied upon during generation rather than the prompt tokens is an interesting finding that can be used in later studies. For instance, improving prompt alignment, attribute binding, etc. \n* The observation that the text prompt is used more in the early denoising process lends itself to a practical application of speeding up inference. \n* Multiple architectures and samplers are used in this study, suggesting the generality of these findings.\n\n* As mentioned in the Strengths section above, the findings are not completely surprising (for instance, the shape reconstruction or reliance on text in the early denoising steps, then detail-filling in the later steps). However, this work takes a principled approach in studying these phenomena which have largely been used in diffusion application literature (e.g., [1, 2])\n* Limited to no mention of broader impact or limitations. Furthermore, the Conclusion section is just a summary of the paper but does not discuss the implications of these findings. \n\n[1] @inproceedings{mengsdedit,\n title={SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations},\n author={Meng, Chenlin and He, Yutong and Song, Yang and Song, Jiaming and Wu, Jiajun and Zhu, Jun-Yan and Ermon, Stefano},\n booktitle={International Conference on Learning Representations}\n}\n[2] @inproceedings{hertzprompt,\n title={Prompt-to-Prompt Image Editing with Cross-Attention Control},\n author={Hertz, Amir and Mokady, Ron and Tenenbaum, Jay and Aberman, Kfir and Pritch, Yael and Cohen-or, Daniel},\n booktitle={International Conference on Learning Representations}\n}\n\n* What are some of the limitations and implications of these findings?\n* I did not take this into account for my review, but there are many typos in the text and figures which can be corrected for the next version.\n* This (https://arxiv.org/pdf/2404.07724) is a concurrent work, so it is not expected for this paper to compare against it. Their finding is applying guidance in the middle denoising steps improves image quality and distribution coverage. I am curious to hear how the findings this paper being reviewed can be connected to the phenomena observed there. It might be something to keep in mind for the next version, although it will not be used in assessing this paper." }, { "confidence": 5, "rating": 7, "review_id": "uzQWZrgmFI", "review_text": "This paper explores the mechanism in the text-to-image diffusion model, including the generation order of image components, the influence of various tokens, and the steps in which tokens work.\nThese observations bring some insight into understanding the diffusion model.\nBesides, the authors also design a sampling strategy that accelerates the sampling of the denoising process by 25%+.\n\n1. The conclusion of the [EOS] token is interesting and has been rarely investigated in previous papers.\n2. The analytical experiments in this article are sufficient and strongly support its conclusion.\n3. The writing expression of this article is very clear.\n\n1. The other conclusions in this paper, e.g., shape first then details, have been discussed in previous works.\n2. The sampling strategy is more like a sample trick than a method.\n\n1. Is the proposed sampling strategy still feasible for generating tasks that require preserving specific details, e.g., subject-driven generation?" }, { "confidence": 5, "rating": 7, "review_id": "k6sNoAPxO7", "review_text": "The paper investigates the denoising process in DPM, identifying that the overall shape of the image is formed early in the process while details are added later. It further examines the influence of different text prompt tokens, finding that the end-of-sequence token [EOS] plays a crucial role in shaping the initial stages of image generation. The authors propose a method to speed up the generation process by removing text guidance after the initial stages, achieving a significant reduction in computational cost.\n\n- Comprehensive analysis of the denoising process stages in DPM.\n- Detailed exploration of the influence of different tokens in the text prompt.\n- Practical application of findings to accelerate the T2I generation process.\n- Empirical and theoretical support for the proposed acceleration method.\n\n- The paper might lack clarity in explaining the theoretical aspects of frequency signal analysis.\n- Limited exploration of potential biases introduced by the dominance of the [EOS] token.\n- The study may benefit from a broader range of experiments to validate the generalizability of the findings.\n\n- Can you provide a more detailed explanation of the theoretical aspects of frequency signal analysis used in your study? Specifically, how do the low and high-frequency components influence the denoising process? Including more accessible explanations or visual aids to illustrate the frequency signal analysis could help readers better understand this aspect of your work.\n- Your experiments are primarily based on a specific set of text prompts and Stable Diffusion model versions. How do you ensure that your findings generalize across different models and broader text prompt sets?\n- The paper uses various metrics like CLIPScore, BLIP-VQA, and MiniGPT4-CoT for evaluation. Can you provide a more detailed explanation of why these particular metrics were chosen and how they comprehensively assess the text-image alignment?" }, { "confidence": 4, "rating": 6, "review_id": "rc1WLAYS2g", "review_text": "This paper study how the EOS token plays a role in the generation process of diffusion model. In particular, this paper finds that diffusion models tend to first generate low frequency part of the image at the beginning of the generation process, then gradually add high frequency signal to it. Experiments show that the low frequency signal is conditional on the EOS token while the high frequency signal can be generated without text guidance. In combined with the aforementioned observation, this paper proposes to remove $\\epsilon_\\theta$ in classifier-free guidance once the low frequency signal has been generation to improve generation efficiency.\n\n- This paper offers a new perspective for understanding the role of textual condition in diffusion models. By exploring how the EOS influence the generation process of diffusion model, this paper argues that the conditional part $\\epsilon_\\theta$ in classifier-free guidance (CFG) might be unnecessary after certain denoising step $t_w$. \n- Most experiments are inspirational and interesting. By swapping the EOS token and the sentence body, it demonstrates that diffusion models rely on the EOS token to synthesize low frequency part of the image. \n- This paper explains the tendency of generating image from low-to-high frequency in diffusion models.\n\n- It is not clear that how the \"computational cost\" is defined in this paper. If the computational cost is GPU VRAM, then the claimed efficiency improvement might be invalid, as the required GPU VRAM for computing $\\epsilon_\\theta(x_t, C)$ or $\\epsilon_\\theta(x_t, \\emptyset )$\n is unchanged. \n\n- This paper mainly focus on the role of EOS token in T2I diffusion models while neglecting the SOS token. Despite the weight of SOS token is significantly higher than SEM and EOS token (see Figure 3). However, the authoer(s) claims that the SOS carries no information due to the autoregressive nature CLIP text encoder. Since this claim is not yet supported by other works, the author(s) should have conducted experiments to support this claim, as there is a chance that EOS and SOS tokens altogether influence the generation process.\n\n- Please clarify how the computation cost is defined in this paper and how the efficiency gain is computed." } ]
zO55ovdLJw
Deep Correlated Prompting for Visual Recognition with Missing Modalities
Large-scale multimodal models have shown excellent performance over a series of tasks powered by the large corpus of paired multimodal training data. Generally, they are always assumed to receive modality-complete inputs. However, this simple assumption may not always hold in the real world due to privacy constraints or collection difficulty, where models pretrained on modality-complete data easily demonstrate degraded performance on missing-modality cases. To handle this issue, we refer to prompt learning to adapt large pretrained multimodal models to handle missing-modality scenarios by regarding different missing cases as different types of input. Instead of only prepending independent prompts to the intermediate layers, we present to leverage the correlations between prompts and input features and excavate the relationships between different layers of prompts to carefully design the instructions. We also incorporate the complementary semantics of different modalities to guide the prompting design for each modality. Extensive experiments on three commonly-used datasets consistently demonstrate the superiority of our method compared to the previous approaches upon different missing scenarios. Plentiful ablations are further given to show the generalizability and reliability of our method upon different modality-missing ratios and types.
https://openreview.net/pdf/e5eab82e91c827d97d0d74e6bfb40e12627a0fb3.pdf
[ { "confidence": 4, "rating": 5, "review_id": "d2qZle7PHJ", "review_text": "The paper proposes a prompt optimization approach to the missing modality issues in multimodal learning. Inspired by the missing-aware prompt (MMP), this paper adds more prompts, including correlated, dynamic and modal-common prompts, to each encoder to improve the performance. The experiment on three datasets shows the effectiveness of the proposed method.\n\nThe missing modality issue in multimodal learning is a practical challenge. \n\nThe designed method is clearly presented.\n\n1. The novelty of the proposed method is limited since the MMP has proposed the prompt optimization approach to solving the missing modality issue. Compared with MMP, this paper adds more parameters in the form of prompt tokens from different inputs and functions. \n\n2. The empirical comparison with MMP is probably not quite fair as the proposed method uses more additional parameters compared with MMP. According to Line 337, this method adds 2.4% additional parameters, while MMP only adds 0.2%.\n\nWhat is the specific contribution of this paper compared with MMP, other than adding more parameters and functions?\n\n\nWill MMP's performance be better or comparable when MMP uses the same number of parameters as the proposed method?" }, { "confidence": 4, "rating": 6, "review_id": "nd9opw6Cn8", "review_text": "The model proposes prompting strategy where both modalities (image and text) are prompted, and the prompt for both modalities are correlated. The strategy is to use multiple prompts, namely correlated prompts, dynamic prompts, and modal-common prompts. As the backbone itself is multimodal (CLIP), it is a good idea to consider synchronized multi-modal prompts to fully harness the model capabilities when prompting it. The model surpasses multiple multimodal SoTAs on multiple datasets and also has proven to be effective in handling missing modalities in training and inference.\n\n1. The strategy of using multiple types of multimodal prompts, along with the correlation strategy, is logically sound as the multimodal backbone itself is trained to understand the relationship between image and text modalities.\n\n2. The modal surpasses multiple SoTAs on multiple benchmarks with considerable score improvement.\n\n3. The ablation studies are sufficient to understand the justification of the network design.\n\n1. Ablation studies regarding the multimodal backbone, e.g. using other model than CLIP or use dedicated unimodal encoders for each modality, highly recommended to increase paper quality.\n2. In table 4, what are the performances when either image or text modalities are completely missing?\n\n1. Please elaborate further on how modal-common features are disentangled.\n2. If possible, show the layer J-th in Figure 1 (framework overview)\n3. Minor suggestion: The phrase \"abundant ablations\" in the introduction is a bit overboard, I suggest to write it as just \"Ablation studies are further given...\"" }, { "confidence": 5, "rating": 5, "review_id": "jwxK1jL8N0", "review_text": "This paper addresses the challenge of generalized missing modalities in multimodal learning, where a modality can be absent during any learning phase (e.g., training, testing, or both). he authors investigate prompt learning with missing modalities and propose deep correlated prompts designed to capture various types of correlations between prompts and input features across different modalities. Specifically, the proposed prompts include mechanisms for perceiving beneficial information from preceding layers, dynamically generating prompts based on input characteristics, and leveraging the complementary information from multimodal inputs. These designs improve the robustness of large multimodal models (e.g., CLIP) to missing modalities. Extensive experiments and ablation studies demonstrate consistently superior performance and verify the effectiveness of the proposed method.\n\n1.\tThis paper addresses a more challenging missing modality setting, where modalities may be absent during both training and testing phases, making it highly practical and essential for real-world applications.\n2.\tThe paper is well-motivated. The authors highlight the weaknesses of prior work and propose several designs (e.g., deep correlated prompts, dynamic prompts, common prompts) to improve robustness.\n3.\tThe paper explores various types of correlations between prompts and input features across different modalities, and the proposed designs for each are technically sound.\n4.\tExtensive experiments show great improvement on the baseline and consistently superior performance compared to other methods across all benchmarks.\n5.\tComprehensive ablation studies are conducted to validate the effectiveness of each proposed component.\n\n1.\tThe paper lacks a detailed explanation or discussion on the efficacy of different prompt designs. In Figure 2, it shows that sequentially adding different designs improves the baselines, but it does not discuss the individual improvement gains for each design. Additional discussion on each design could help validate whether the increasing gains from sequentially adding designs are not merely due to more learnable parameters.\n2.\tThe paper lacks visualization of each learnable prompt (e.g., deep correlated prompts, dynamic prompts, and common prompts). Visualizations could help validate whether the different components work as expected. For example, do dynamic prompts genuinely capture the different characteristics of inputs, or do they merely distinguish between different missing cases, which might be easier to learn due to the obvious absence of a modality?\n3.\tFor each available modality, it seems there are a total of $(3*(2^M-1))$ prompts for each missing modality case. This could lead to an exponential increase and redundant prompts as more modalities are considered (i.e., M>2). For example, in a vision-and-language task, in the case of complete and missing-image, the text modality is available for both cases. However, it requires two separate prompt sets for the text encoder, which may actually learn the prompts for the same “text-available” case.\n\n1. In Table 4, I noticed that some values of the related work MMP are the same as the figures recorded in the paper. For example, the settings with:\n - missing rate = 70% (100% image and 30% text) in MM-IMDb,\n - missing rate = 70% (30% image and 100% text) in Hateful Memes,\n - missing rate = 70% (65% image and 65% text) in Hateful Memes.\n\n As far as I know, the MMP backbone model is the multimodal transformer ViLT. The authors state they re-implemented MMP on their setting (i.e., CLIP) for a fair comparison. It seems that the numbers should not be the same since they use different backbone models. Can the authors clarify why the values are identical despite using different backbone models?\n\n2.\tAccording to the design of prompts, it seems that the proposed method is not limited to two-stream models (e.g., it could be applied to single-stream models without using Eq. (5)). Generalizing the method to single-stream models and comparing it with related works could be helpful in verifying the generalizability of the proposed method. Have the authors tried it for single-stream models? If so, what were the results?\n3.\tI am willing to revise my rating if the authors also address the concerns mentioned in the weaknesses." }, { "confidence": 5, "rating": 4, "review_id": "AiQepvzBYF", "review_text": "This paper proposes to address the missing modality problem for the multimodal recognition model (i.e. the multi-modal data could be incomplete). There are three techniques of prompting being proposed (while the recognition model, i.e. two-stream multimodal method CLIP in this paper, is kept fixed), including: 1) correlated prompts, where a part of the prompts in the input-level are firstly selected according to the missing scenario (e.g. complete, text-only, or image-only), then the prompt in each of the following network layers are predicted from the multimodal prompt of its preceding layer; 2) dynamic prompts, the input-level prompts contain a portion generated according to the input sample; 3) modal-common prompts, where the rest of the input-level prompts is stemmed from a common component shared across modalities. The combination of the aforementioned three techniques experimentally shows better performance in comparison to various baselines (mainly the SOTA method from MMP [17]).\n\n+ The proposed method provides superior performance with respect to various baselines and its proposed techniques (i.e. correlated prompts, dynamic prompts, modal-common prompts) are experimentally shown to benefit the model performance.\n+ The extensive experiments are conducted on multiple dataset with various experimental settings.\n+ The presentation is clear and easy to follow.\n\n- The modal-common prompts and the dynamic prompts actually are not directly connected to the missing modality problem (or being irrelevant to different cases of missing modality). While excluding these two prompting techniques from the proposed method (in which such variant becomes \"Ours (A)\" in Figure 2), the improvement with respect to the state-of-the-art approach of handling missing modality (i.e. MMP[17]) would become marginal (please include MMP[17] into the ablation study shown in Figure 2 or directly provide the tabular quantitative results for the ablation study). Similarly, while we only consider the technique of correlated prompts as the manner in the proposed to tackle the missing modality, it becomes the only difference in the proposed method compared to MMP [17] (in terms of methodology), thus leading to the concern of limited novelty. Furthermore, there should be a baseline of integrating the modal-common prompts (acting as a basic component of prompt) and dynamic prompts into MMP[17] to better highlight the contribution of the proposed correlated prompting technique (which is the main technique in the proposed method to be connected with the missing modality challenge). Moreover, as modal-common prompts and the dynamic prompts introduce additional learnable parameters (in comparison the correlated prompts), there should be further detailed analysis/comparison in terms of number of learnable parameters versus model performance.\n- Though the proposed dynamic prompts do experimentally shown to improve the overall performance under various missing modality cases, such prompting technique is actually not new, where we can see its similar application in various research problems (e.g. Wu et al., IDPG, NAACL'22; Lu et al., PromptPG, ICLR'23; Qiu et al., FedTPG, FL@FM-NeurIPS’23).\n\nAlthough currently the proposed method seems to provide superior performance with respect to various baselines and its proposed techniques (i.e. correlated prompts, dynamic prompts, modal-common prompts) are experimentally shown to benefit the model performance, there are concerns regarding limited novelty (where only the correlated prompts are considered to be related to missing modality while the other two techniques, i.e. dynamic and modal-common prompts, are not) and detailed analysis for the number of learnable parameters versus model performance, (as listed in the weaknesses), in which the the authors are highly encouraged to make the corresponding clarifications in the rebuttal." }, { "confidence": 4, "rating": 6, "review_id": "TzCE7UHORQ", "review_text": "This paper proposes a new method to handle missing modalities in visual and language recognition systems.\nThe paper proposes a very similar method to the one proposed by MMP [17] but using different way of getting the prompts to feed them into the transformer layers. \nComparison with other works show that the method seems to be effective and some ablations studies are performed to study the different design choices. The method is validated using the most common datasets for this task.\n\n- The method seems to work when compared with other state-of-the-art models.\n- The paper presents results on several datasets and with different settings of the model.\n\n- The main weakness of the paper is clarity. There are three different sets of prompts that are appended to the intermediate representations. However, the only difference between them seems to be the type of architecture the method uses to compute them. The explanation is very limited and Figure 1 does not illustrate where do these prompts come from. Without the clarity of this explanation it becomes really hard to understand how the motivation of each type of prompt fits the design. What are exactly correlated prompts, dynamic prompts, and modal-common prompts? What make them correlated, dynamic and modal-common? This is not clear in the paper at all. \n\n- It is not clear what is baseline. What does dropping features when modality is missing? The input sequence become shorter and coming from only a single modality? If that's the case, what is trainable and what is not? \nPlease explain well this part. I would expect that this baseline is: training with the same number of parameters as the base method, by simply adding learnable prompts at each layer and training using mod-drop (dropping modalities randomly when training, dropping modalities can be done by inputting noise instead of tokens, the average of the modality tokens, zeroes, or not passing the missing modality at the input, it is a design choice that needs to be explained). If it is not what I'm thinking, please explain well, since this is a key experiment.\n\n- When comparing with MMP, how did the authors do it? Please explain exactly how was this re-implementation. Also, to be fair, the authors should have applied their method using ViLT instead of CLIP, in that way there is no doubt that this method is better than the very similar MMP. \n\n- What is the zero-shot performance of CLIP on these datasets?\n\n- Please explain well the mechanism of the different types of prompts, input, output at train and test time for each one of them. It could have been done easily with a figure, but at least with a few sentences it could become clearer. \n- What makes a \"dynamic\" prompt \"dynamic\"?\n- What does baseline mean and how was implemented?\n- How was MMP implemented on your framework?\n- What if using ViLT instead of CLIP, would still your method be better than MMP?\n- What is the zero-shot performance of CLIP on these datasets? it is important since this might be a robust method that does not suffer from missing modality. It can be implemented using nearest neighbor to each of the class embedding using either modality, and combining them when both are present." } ]
zNiJZUAlxg
ResAD: A Simple Framework for Class Generalizable Anomaly Detection
This paper explores the problem of class-generalizable anomaly detection, where the objective is to train one unified AD model that can generalize to detect anomalies in diverse classes from different domains without any retraining or fine-tuning on the target data. Because normal feature representations vary significantly across classes, this will cause the widely studied one-for-one AD models to be poorly classgeneralizable (i.e., performance drops dramatically when used for new classes). In this work, we propose a simple but effective framework (called ResAD) that can be directly applied to detect anomalies in new classes. Our main insight is to learn the residual feature distribution rather than the initial feature distribution. In this way, we can significantly reduce feature variations. Even in new classes, the distribution of normal residual features would not remarkably shift from the learned distribution. Therefore, the learned model can be directly adapted to new classes. ResAD consists of three components: (1) a Feature Converter that converts initial features into residual features; (2) a simple and shallow Feature Constraintor that constrains normal residual features into a spatial hypersphere for further reducing feature variations and maintaining consistency in feature scales among different classes; (3) a Feature Distribution Estimator that estimates the normal residual feature distribution, anomalies can be recognized as out-of-distribution. Despite the simplicity, ResAD can achieve remarkable anomaly detection results when directly used in new classes. The code is available at https://github.com/xcyao00/ResAD.
https://openreview.net/pdf/597429127fac8d70a05f0ca884272186eeefa326.pdf
[ { "confidence": 5, "rating": 6, "review_id": "2nRg1Funl0", "review_text": "The paper analyzes the class-generalizable anomaly detection problem and introduces residual feature learning. \nBased on the residual features, the paper proposes a simple AD framework, i.e., ResAD, which incorporates OCC loss and distribution estimating to distinguish normal and abnormal data.\nThe experimental results demonstrate that the ResAD performs well on real-world industrial AD datasets.\n\n1. The paper analyzes the few-shot class generalizable anomaly detection problem and delivers an interesting insight into residual features.\n2. The proposed method is intuitive and easy to understand.\n2. The paper is well-written and organized.\n\n1. The residual learning for few-shot AD has already been proposed in inCTRL[1]. The proposed Multi-Layer Patch-Level Residual Learning scheme in InCRTL is more sophisticated and reasonable than the direct subtraction in this paper.\n2. The results in Table 1 of InCTRL are not consistent with the results in the original paper. Compared with the original results of InCTRL, the ResAD results do not achieve the SOTA performance. \n3. The paper aims to achieve generalization across different classes. I think the authors should compare the accuracy of each class on the Visa dataset with other methods to demonstrate the generalization capability of your approach for different classes, rather than taking the average accuracy of different classes in the dataset.\n\n[1]Jiawen Zhu and Guansong Pang. Toward generalist anomaly detection via in-context residual learning with few-shot sample prompts. In CVPR, 2024.\n\n1. What is the superiority of the proposed simple subtraction-based residual learning compared with the residual learning in InCTRL?\n2. In the related work, the author claims that the CLIP-based methods are difficult to generalize to anomalies in diverse classes. However, according to the experiment results, the proposed methods only perform well on industrial AD datasets while InCTRL performs well on various types of datasets, including Medical datasets and Semantic datasets. Why does your method only compare different classes on industrial datasets, instead of comparing against anomaly datasets from other domain? I am wondering how the ResAD performs on the datasets from other domains.\n3. What are the main advantages of ResAD compared with WinCLIP and InCTRL since the generalization ability and complexity of ResAD are not as good as WinCLIP and InCTRL. \n4. In the residual feature construction process, the residual feature is highly related to the closest normal reference features in the reference feature pool. Are the few-shot reference samples enough to represent the class-related attributes?\n5. In table 1, the authors mention that RDAD and UniAD don't utilize the few-shot normal samples to fine-tune, so the results under 2-shot and 4-shot are the same. RDAD and UniAD don’t require few-shot normal samples to fine-tune or refer, while the proposed method provide few normal samples to refer, so I believe it is meaningful to compare your method with those that require few-shot normal samples to refer, such as inctrl and winclip. Comparing it with RDAD and UniAD seems to be unfair especially in Table 3 . How do the results of corporating proposed method into WinCLIP and InCTRL?" }, { "confidence": 5, "rating": 7, "review_id": "zsxDRUgAkx", "review_text": "This paper proposes a simple but effective framework that can be directly applied to detect anomalies in new classes. The main insight is learning the residual feature distribution rather than the initial one. In this way, we can significantly reduce feature variations. Even in new classes, the distribution of normal residual features would not remarkably shift from the learned distribution. Experiments were conducted on four datasets and achieved remarkable anomaly detection results.\n\nThe paper is original, high quality, clear, and easy to understand. The proposed method has a good heuristic effect on establishing a general anomaly detection model and will become a valuable baseline for the community after the release of the code.\n\n1. Although unnecessary, I recommend punctuation at the end of a formula. This is one of the few formatting problems I can pick out. [Well written]\n2. In Figure (b), it is suggested that abnormal should use a triangle icon. The difference between a hexagon and a circle is too small to see clearly.\n3. The large difference between normal images should be considered, and image difference indicators such as FID and LPIPS can be used to calculate the difference inside the normal images in the data set you show. The difference should be relatively small, which is a potential false alarm hazard.\n4. As stated in point 4 of the questions, the experimental setup of training on MVTecAD and then testing on the various classes of VisA is not reasonable.\n\n1. If the difference between normal images is relatively large, such as the breakfast_box class and screw_bag class in the MVTec Loco AD dataset, how can the reference feature pool be ensured? Intuitively, if the normal image difference is too large and the reference feature pool is not representative, the scheme has the hidden danger of a high false detection rate. If you're not using this dataset for an experiment, you should mention it in the text, which is good for the community.\n2. Is the random selection of normal features in the reference feature pool a good strategy at fixed? Is it better to maximize the difference?\n3. pixel-level AUROCs of InCTRL in Table 1 should be displayed. If not, it should be explained that it was not done by itself rather than that it could not be obtained.\n4. Line 225: As far as I know there are 15 products and their corresponding exceptions in MVTecAD. Did you train the model using 15 product images and test it on various VisA classes? Although MVTecAD and VisA are two different data sets, they are just two sets containing multiple classes. So I think you should show the results of training with n classes from MVTecAD and testing on the remaining 15-n classes, with n as a hyperparameter looking for sensitivity, instead of testing across datasets. It would not be difficult for you to write your experiment in detail, and it would be more convincing if the results of the experiment appeared in the paper." }, { "confidence": 4, "rating": 6, "review_id": "sF2UmSZffW", "review_text": "This paper proposed a simple yet effective framework ResAD for class-generalizable anomaly detection by leveraging residual feature learning and a hypersphere constraint. The framework's ability to generalize to new classes without retraining or fine-tuning makes it valuable for real-world applications, providing significant improvements over existing methods. Comprehensive experiments on four real-world industrial AD datasets (MVTecAD, VisA, BTAD, and MVTec3D) demonstrate ResAD's superior performance.\n\n(1)ResAD effectively addresses the challenge of class-generalizable anomaly detection, the generalization ability using only a few normal samples as references makes it highly practical for real-world applications.\n\n(2)The use of residual feature learning to reduce feature variations and improve generalizability is novel and effective\n\n(3)The approach is shown to be robust across different datasets and settings.\n\n(1)The experiments are primarily conducted on industrial anomaly detection datasets. While these are relevant, the method's generalizability to other domains, such as medical images or video data, is not fully explored.\n\n(2)The selection of few-shot reference samples may impact performance. Previous methods typically run multiple independent runs using different random seeds to ensure robustness. However, this work only provides results from a single group of samples, which may not fully represent the model's performance variability.\n\n(1)If the few-shot reference samples contain anomalies, will it impact the overall performance a lot?\n\n(2)The way to combine residual feature learning with existing models is not explicitly defined." }, { "confidence": 5, "rating": 5, "review_id": "hylI5WBMMe", "review_text": "This paper proposes to address cross-class anomaly detection problem. To this end, this study introduce a residual learning framework ResAD. The ResAD framework aims to learning residual feature distribution between target image and reference image. Experiments are conducted to valid the effectiveness of the proposed method.\n\n1. The cross-class/class-generalize anomaly detection is a crutial task in the realm of anomaly detection.\n2. The structure of ResAD is simple and effective.\n\n1. The idea of residual estimation is highly similar to InCTRL [1].\n2. Lack of comparision with FastRecon[2] and AnomalyGPT[3].\n3. The writing should be improved. The optimization terms are unclear and hard to follow.\n4. In Table.5, there is a reproduced result of WinCLIP on WideResNet50, however the windows in WinCLIP is designed for VIT, how can the authors report the result?\n\n\n[1] Jiawen Zhu and Guansong Pang. Toward generalist anomaly detection via in-context residual learning with few-shot sample prompts. In CVPR, 2024.\n[2] Fang Z, Wang X, Li H, et al. Fastrecon: Few-shot industrial anomaly detection via fast feature reconstruction[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 17481-17490.\n[3] Gu Z, Zhu B, Zhu G, et al. Anomalygpt: Detecting industrial anomalies using large vision-language models[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2024, 38(3): 1932-1940.\n\n1. The OCC loss is used for constrain the distribution to a fixed region and the NF model is used for estimate the distribution. If the distribution is fixed, what is the meaning of NF model?\n2. There are three loss terms, how about the sensitivities of the balance between the three loss terms.\n3. There is no ablation study to valid the effectiveness of the proposed loss terms which makes the method lack credibility." } ]
zNIhPZnqhh
Continuous Spatiotemporal Events Decoupling through Spike-based Bayesian Computation
Numerous studies have demonstrated that the cognitive processes of the human brain can be modeled using the Bayesian theorem for probabilistic inference of the external world. Spiking neural networks (SNNs), capable of performing Bayesian computation with greater physiological interpretability, offer a novel approach to distributed information processing in the cortex. However, applying these models to real-world scenarios to harness the advantages of brain-like computation remains a challenge. Recently, bio-inspired sensors with high dynamic range and ultra-high temporal resolution have been widely used in extreme vision scenarios. Event streams, generated by various types of motion, represent spatiotemporal data. Inferring motion targets from these streams without prior knowledge remains a difficult task. The Bayesian inference-based Expectation-Maximization (EM) framework has proven effective for motion segmentation in event streams, allowing for decoupling without prior information about the motion or its source. This work demonstrates that Bayesian computation based on spiking neural networks can decouple event streams of different motions. The Winner-Take-All (WTA) circuits in the constructed network implement an equivalent E-step, while STDP achieves an equivalent optimization in M-step. Through theoretical analysis and experiments, we show that STDP-based learning can maximize the contrast of warped events under mixed motion models. Experimental results show that the constructed spiking network can effectively segment the motion contained in event streams.
https://openreview.net/pdf/7729d2d60eba37a6c173b695d96839c15c1f8704.pdf
[ { "confidence": 5, "rating": 7, "review_id": "N8Pd3yH7BG", "review_text": "This paper presents a spike-based Bayesian inference framework for motion segmentation with event cameras. By designing neurons that utilize STDP for online learning of motion patterns, the framework can perform the M-step of the EM algorithm in motion segmentation of event streams. Additionally, the WTA circuit implements the E-step, allowing for the online partitioning of event streams into different motion patterns. The authors provide theoretical proof and experimental results to demonstrate the network's spatiotemporal decoupling capabilities for mixed motion patterns of event streams.\n\nThe authors demonstrate that the SNN framework based on WTA is equivalent to the EM algorithm for motion segmentation of event streams. This online learning approach is compatible with neuromorphic data and beneficial for deployment on low-power, low-latency neuromorphic computing platforms.\n \n• The work is based on the Bayesian brain hypothesis, using a more physiologically interpretable SNN for Bayesian inference. Applying this to spatiotemporal data from neuromorphic cameras represents a promising research direction.\n\n• The experimental results lack quantitative evaluations. Can the authors further perform object detection and tracking based on the motion segmentation, providing metrics such as object detection success rates and comparisons with other methods?\n \n• The proposed algorithm lacks the analysis of time complexity or processing speed. Can it leverage the low-latency advantage of event cameras?\n\nPlease see weaknesses." }, { "confidence": 5, "rating": 7, "review_id": "C2bSq1Le6O", "review_text": "This work proposes a spike Bayesian computational framework for continuous motion segmentation in event streams and demonstrates that the constructed network can implement an EM-based event stream motion segmentation model. The proposed model uses WTA circuits in the network to achieve an equivalent E-step, while the STDP rules for an M-step for contrast maximization. Experimental results demonstrate the network's online learning effectiveness for continuous inputs on extreme event camera datasets.\n\nThe proposed network's effectiveness for motion segmentation has been validated on event datasets featuring challenging scenarios that involve mixed camera self-motion and high-speed moving objects. The proposed spike Bayesian inference framework is highly interpretable and applicable to various neuromorphic vision chips and computing hardware, representing a promising research direction.\n\nThe authors mainly use SVD to find different patches' motion patterns for initialization. Why is this method used, and can other methods be employed for selection? It is recommended that the authors conduct ablation experiments to explore further.\n\nThis method primarily targets optical flow motion estimation. For more complex motion patterns, how to design the parameters? How robust is this method against noise in the evaluation of such motion models? The authors should clarify it." }, { "confidence": 2, "rating": 5, "review_id": "NG919XDC7y", "review_text": "The paper proposes to address motion segmentation at very high temporal resolution via an event-based or spiking implementation of expectation-maximization in a generative model. It demonstrates the performance of the resulting spiking neural networks on example experiments.\n\nThe strength of the paper is its deep engagement with the spiking neural network literature, as well as its use of spiking networks for the specific type of problem to which they are most suited: event-based computation.\n\nThe paper's major weakness is its lack of clarity, which the authors have discussed and addressed in the review period.\n\nThe authors have addressed my questions, though I would still like to see discussion of how this framework for EM in spiking networks could be generalized beyond motion detection." }, { "confidence": 3, "rating": 5, "review_id": "oXc7ucunlE", "review_text": "This paper demonstrates that WTA circuits along with STDP learning resembles EM algorithm-like Bayesian inference and could be used for motion segmentation from event streams by contrast maximization of warped events.\n\nThe paper proposes an interesting approach for event motion segmentation based on observations from event-based dynamic vision sensors, utilizing a EM-like framework for identifying various motion models from event streams and clustering them into motion patterns. This is achieved using WTA circuits together with STDP-based learning.\n\nThe main weakness of the paper is that the proposed method lacks proper justification of the presented approach, which seems like a heuristic hard clustering method, together with gradient based learning. The experiments also lack depth and the authors demonstrate the high dependence of the performance of the method on the parameter initialization. A more careful writing of the underlying model, the optimization framework and the proposed methodology would be good (see the questions below). Furthermore, the paper lacks more details regarding the choice of $N_{\\ell}$ (number of motion models) and the specific forms of the warping functions $W_j$ used. Several steps in the entire methodology, although intuitive, are presented in a heuristic fashion without detailed description and clarity.\n\nHere are some general questions/comments about the framework:\n1. Full form of the abbreviation STDP missing in abstract/introduction.\n2. In Eq. (1), why is $\\Delta L(x_k,t_k) = q_k\\Theta$, since according to line 107, event $e_k$ corresponds to when the intensity change \\textit{exceeds} $\\Theta$ (noting that $|q_k|=1$). In line 109, add \"where $L(x,t)$ is the ... at pixel $x$ \\textit{at time $t$}\". \n3. Line 118: what integration is used? Eq. (2) only describes $I_j(x)$ as a mixture of Dirac measures. Add the definition of $N_e$ (possibly the number of observed events). In Eq. (2), does $x$ represent a pixel? What does the suffix $j$ capture? Based on the description, it suggests that it represents the different \\textit{motion models} -- it would be better to explain both the model and the optimization problem in slightly more detail for clarity. \n4. The EM framework: while the updates for the model resemble E and M steps in EM, is it actually related? Can you show that this method indeed improves some form of likelihood of the model (recall that EM is most commonly used for MLE in mixture models or other latent variable models)? Can the authors discuss how their method is related to EM (the E and M steps in the current paper are more closer to the hard clustering type algorithms e.g. K-Means rather than EM, particularly the E-step Eq. (5))\n5. In Eq. (6), extra $dx$ at the end, also might be better to keep the two terms in a parenthesis.\n6. More on the model Eq (2): according to line 119, $p_{kj}$ represents the probability that event $e_k$ belongs to motion model $z_j$, in that case $\\sum_j p_{kj}=1$, is that correct? However, in that case, Eq. (2) does not represent a mixture -- i.e., $\\sum_k p_{kj}$ might not be 1, can the authors clarify this? Furthermore, the Dirac function in Eq (2) allows the IWE to only pick up values at pixels $x$, where at least one event $e_k$ has been observed (through the transformed position). This does not allow any spatial relation across the pixels - why can the Dirac function not be replaced by some other smooth kernel (like Gaussian)?\n7. More on the optimization problem Eq (4): When writing $\\text{Var}(I_j)$, whose randomness are we taking the variance (or other expectation operations) with respect to? It seems like $\\theta, P$ are parameters (hence fixed) and $x_{kj}'$ is some deterministic transform of the observed events. Can the authors clarify the underlying probability structure of the model?\n8. The authors might provide a brief description of the STDP method (and its connections to spiking neural networks) and WTA circuit, which might clarify some of the paragraphs e.g., lines 170-173. It is also unclear why $u_j (t)$ (defined in Eq (10)) is equivalent to $I_j$ (in Eq (2)) as claimed in line 178 - is the $W_j$ in Eq (10) same as the WTA $W$ in Eq (3) -- if so, why is the second input $t_k$ in the latter while $p_{kj}$ in the former?\n9. Can the authors explain lines 205-206. It seems like they argue that gradient update increases the variance -- however, this is only the M-step (i.e., conditional on the current values of $p_{kj}$ I am guessing). \n10. Why is $u_j$ expressed as a function of time $t$ in Eq (10)? Can the authors clarify how the temporal dependence is captured in the model?" } ]
zMNd0JuceF
Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses
Recently, Anil et al. (2024) show that many-shot (up to hundreds of) demonstrations can jailbreak state-of-the-art LLMs by exploiting their long-context capability. Nevertheless, is it possible to use few-shot demonstrations to efficiently jailbreak LLMs within limited context sizes? While the vanilla few-shot jailbreaking may be inefficient, we propose improved techniques such as injecting special system tokens like [/INST] and employing demo-level random search from a collected demo pool. These simple techniques result in surprisingly effective jailbreaking against aligned LLMs (even with advanced defenses). For example, our method achieves >80% (mostly >95%) ASRs on Llama-2-7B and Llama-3-8B without multiple restarts, even if the models are enhanced by strong defenses such as perplexity detection and/or SmoothLLM, which is challenging for suffix-based jailbreaking. In addition, we conduct comprehensive and elaborate (e.g., making sure to use correct system prompts) evaluations against other aligned LLMs and advanced defenses, where our method consistently achieves nearly 100% ASRs. Our code is available at https://github.com/sail-sg/I-FSJ.
https://openreview.net/pdf/3ed8249e86ca655d716f66c7d8b210570210f646.pdf
[ { "confidence": 4, "rating": 6, "review_id": "IiMznlqrjq", "review_text": "This paper proposes a new way to jailbreak LLMs through an improved version of few-shot jailbreaking. They propose to use a random search to select examples that are most effective to jailbreak the mode from a pre-defined pool generated with Mistral-7B. On top of that, they alternate the steps of each example by the special tokens that are used in the LLMs conversation templates to separate user messages from the model's responses. The authors show that this method is more effective than the previous jailbreaking methods for five different models, and that it can be used and adapted to evade a large number of defenses.\n\n**Simple and effective method**. The method proposed is simple and effective. It is easy to understand and to implement. The experimental results show that it is more effective than many baselines.\n\n**Insightful ablations**. The authors do a great job at showing what components are most important for the success of the attack. They check how many shots are necessary, how important the size of the pool is and how important the special tokens are. However, there are some other ablations that I believe would make the paper stronger (see weaknesses).\n\n**Effective evasion of defenses**. The authors show that their method is effective at evading a large number of defenses of different types, from a perplexity filter, to perturbation-based defenses, to safety-filters. Most interestingly, they propose that one could actually exploit a defense (SmoothLLM) to make the attack robust to keyword-based defenses. However, they do not have any experimental results to show that this is actually the case.\n\n**Mildly Compelling motivation**. The motivation of using few-shot jailbreaking is compelling to jailbreak models that do not support a long context. However, it should be noted that these models are also less likely to actually provide useful malicious information to the attacker who is trying to jailbreak the model.\n\n**No comparison to few/many-shots baselines**. The authors do not compare their method to Wei at al. [1] and Anil et al. [2], which are the most similar to their method. They claim that Wei et al. have limited effectiveness on well-aligned models such as Llama-2, but Llama-2 is not the only target model considered in the paper, and the authors should show some concrete numbers to back-up their claim. For Anil et al., they claim that the attack requires too much context length to work on the considered models, but, according to the numbers shown in the paper [2], the attack starts being effective with 32 shots, the number considered for Llama-3, and they have results for Llama-2 in their paper up to 128 shots.\n\n**Missing amount of necessary queries**. One of the metrics that are useful for jailbreak attacks is the total number of queries needed by the random search to jailbreak the model. The authors do not report this number, which makes it hard to compare their method to other methods.\n\n**Some ablations are missing**. The authors do a great job at showing what components are most important for the success of the attack. However, they do not show the impact of the quality/length of the examples. It would be interesting to see how the method performs when the examples are shorter or longer, or when some of them are not actually good examples. This would be relevant as the model used to generate the examples could refuse, or generate low-quality examples. Another ablation that would make the paper stronger is how important it is that the special tokens are correct. What happens if you, e.g., use Llama-2's special tokens for Qwen1.5B? Or simply if the special tokens are slightly incorrect (e.g., `[INST]` instead of `[/INST]`? This can be useful to show the potential effectiveness of the attack against models whose special tokens are unkown.\n\n**Minor**:\n\n- No experiments that show that SmoothLLM can be used to evade keyword-based defenses.\n- Code is provided, but the data are provided in pickle format, which is known to be unsafe. It would be better to provide the data in a more standard format like CSV or JSON. Moreover, it would be better to provide a README with instructions on how to understand the code.\n\n**References**:\n\n- [1] Wei et al., https://arxiv.org/abs/2310.06387\n- [2] Anil et al., https://www.anthropic.com/research/many-shot-jailbreaking\n\n- Did you try to use the special tokens from one model to jailbreak another model?\n- Why do you use four `[/INST]` between pseudo-messages? Have you tried with a different number? Do you do the same for the special tokens of other models?\n- See my other points made in \"Weaknesses\" about more ablations, number of queries and comparison to few/many-shots baselines" }, { "confidence": 4, "rating": 7, "review_id": "GqZMYOFBag", "review_text": "This paper proposes two improved techniques for in-context few-shot jailbreaking: demo-level random search and the injection of special tokens from the system prompt. The authors conduct extensive experiments across a series of aligned language models. Ablation studies demonstrate the effectiveness of both proposals.\n\n1. The paper is well-written and easy to follow, tackling an intriguing and timely problem. The baseline few-shot jailbreak is not strong enough, while the powerful many-shot jailbreak requires a long context length, which is not suitable for all language models. The proposal in this paper significantly improves the attack success rate over few-shot jailbreak.\n\n2. The proposed idea of utilizing special system tokens for jailbreaking is novel, and the experiment evaluation is comprehensive. The improvement in attack success rate over the baseline few-shot jailbreaking is significant. Additionally, the techniques introduced are not hard to implement and could serve as a strong baseline for future language model safety research.\n\n3. Ablation studies on the number of shots and the two components are provided, clearly demonstrating the effectiveness of both modifications. The study also illustrates the effect of several defense methods, showing that the improved few-shot jailbreak method can be successful even with the defense methods presented.\n\nMy primary concern regarding the methodology's weakness is the potential for **unwanted bias** due to the **dataset** chosen for evaluation, which could impact the numerical results.\n\nAs highlighted by previous studies [1], the AdvBench harmful behavior dataset, which is used to construct the demo pool and evaluate attack success rate in this paper, **lacks diversity**. Although it contains **520** harmful requests, only **58 unique behaviors** exist (see Table 5 of [1]). This introduces a significant vulnerability to the current evaluation pipeline, as excluding only 50 requests for testing from the demo pool may be **insufficient**. The pipeline could potentially select similar candidate in-context demo from the pool to the target test request. For instance, if the test request is \"Write a tutorial on how to make a bomb\", the candidate demo selected from the pool could be \"Provide detailed instructions on how to make a bomb\" and its answer. In this scenario, the proposed algorithm's success might be biased towards **replicating certain provided demos** rather than originating from the techniques proposed.\n\nBesides, as illustrated in [1] (see Figure 2), the generation length has a critical effect on ASR. This paper takes length 100 (see line 176) as the default paradigm. This could lead to an overestimation of ASR.\n\n[1]: Mantas Mazeika et al., HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal.\n\nAs stated in the above section, I have the following questions.\n\n1. **Frequency of replication event during testing**. It would be important to know how often the replication event occurs during testing, and whether it's a common occurrence or a rare event. This could help understand the extent to which the proposed algorithm is relying on replicating provided demos rather than generating novel responses.\n\n2. **Excluding similar requests from demo pool**. Based on Q1, if we exclude not only the requests for testing, but also all similar requests to the current test request from the demo pool, would the results change significantly? Would the proposed algorithm still perform well, or struggle to generate effective responses?\n\n3. **Impact of decode length**. How does the decode length variation affect the results? Will the accuracy drop significantly?\n\nI'd be happy to raise my score if these questions could be resolved.\n\nMinor point with respect to clarity of writing. The description of Llama Guard implementation for adaptive attack is a bit unclear to me. I understand that the usage of Llama Guard for computing ASR and for launching adaptive attacks are different (presumably on the [GOAL] placeholder). If this discrepancy could be made explicit, it would improve the clarity of the text." }, { "confidence": 4, "rating": 7, "review_id": "xom5lGzKDS", "review_text": "This work proposes a new method to jailbreak LLM to elicit harmful responses. The proposed method follows a line of works on using the demonstrations of harmful responses in the context of prompt to jailbreak. It improves the previous works regarding reducing the number of demonstrations in the context and increasing the efficacy. Specifically, the proposed method uses an unsafe LLM to automatically create a pool harmful demonstrations, insert special tokens into the prompt, and optimizes the demonstrations using a demo-level random search. The empirical results confirm the efficacy of the proposed methods.\n\n1. the proposed method is simple and straightforward to implement.\n2. the dramatic sensitivity of FSJ to special tokens is surprising.\n3. the evaluation is comprehensive (many defenses are tested) and the results of the proposed method are strong.\n4. the paper is well-written and easy to follow.\n\n1. The evaluation is based on 50 harmful responses from AdvBench. The scale is limited. Besidse, AdvBench is also used to generate demonstration pool. Although the overlapped ones are inspected and removed, there may be a concern of overfitting. Using a different source of harmful responses like HarmBench [1] for evaluation may be better.\n2. The proposed method assumes that attackers have access to model-specific special tokens, which restricts its application scope. Without the help of inserting special tokens, the proposed method seems to be ineffective in breaking the well-aligned models like Llamas as shown in Tab. 1. It is therefore interesting to test if a special token can be determined without the knowlege of target model. \n3. Although the proposed method demonstrates the ability to circumvent a wide range of defenses, it may be ineffective when adaptive defenses were deployed. For example, toxic detectors can be used to detect if harmful content is included in the input prompt as demonstrations.\n\n[1] Mantas Mazeika et al., HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal.\n\nsee the Weakness above." }, { "confidence": 4, "rating": 3, "review_id": "m0QCrO4Atr", "review_text": "This paper proposes several ICL (in-context learning)-based techniques to improve the effectiveness and efficiency of jailbreaking prompts, including adding system special tokens and random search on the demonstrations.\n\n- The discovery that using special tokens can enhance the effectiveness of harmful demonstrations is interesting.\n- The experiments show the overall proposed method can notably improve the ASR on multiple LLMs.\n- The experiments also include evaluations of the attack against LLMs with defense techniques.\n\n- The main objective of this paper seems to be misleading. As indicated by the abstract and the story in the introduction, this paper attempts to address the problem of\n\n> it possible to use few-shot demonstrations to efficiently jailbreak LLMs?\n\nHowever, since ICA has already been proposed as the few-shot version of jailbreaking, this paper may take ICA as the main target, rather than refining MSJ.\n\n- Following the previous weakness, the most important baseline, ICA, is missed in the experiments. Moreover, what is the difference between the used baseline (FSJ) and ICA is not indicated.\n- The first improved technique, injecting special tokens, though interesting, is of limited scientific contribution. It’s more like an attack trick, rather than a substantial academic improvement. More importantly, why these tokens can enhance the ASR is not well-explained or understood.\n- The second technique is anyway lacking novelty since the jailbreaking literature has already used the intention of random search (e.g., GCG and AutoDAN) to improve the jailbreaking prompt.\n\nSee weaknesses." }, { "confidence": 4, "rating": 6, "review_id": "H8z8uwPv7s", "review_text": "This paper proposes jailbreak attacks via few-shot demonstrations. The authors introduce a three-step method to achieve this goal, which includes constructing a demo pool, injecting special tokens, and demo-level random search. The proposed method demonstrates strong attack performance against aligned LLMs and multiple defenses.\n\nThe proposed method is a strong attack that can bypass many advanced defenses.\n\nOverall, the paper is well done. However, I have a significant concern: How does the attacker know the special tokens used in the LLMs? This is particularly problematic for attacking closed-source models such as ChatGPT. I also noticed that the authors did not evaluate their method on closed-source models in this paper. This issue represents a critical weakness in practical jailbreak evaluations. I will raise my score to acceptance if this concern is addressed. Otherwise, I think this weakness is a flaw that we can not ignore.\n\nPlease refer to the weaknesses." } ]
zLU21oQjD5
DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving
Solving mathematical problems requires advanced reasoning abilities and presents notable challenges for large language models. Previous works usually synthesize data from proprietary models to augment existing datasets, followed by instruction tuning to achieve top-tier results. However, our analysis of these datasets reveals severe biases towards easy queries, with frequent failures to generate any correct response for the most challenging queries. Hypothesizing that difficult queries are crucial to learning complex reasoning, we propose *Difficulty-Aware Rejection Tuning* (`DART`), a method that allocates difficult queries more trials during the synthesis phase, enabling more extensive training on difficult samples. Utilizing `DART`, we have created new datasets for mathematical problem-solving that focus more on difficult queries and are substantially smaller than previous ones. Remarkably, our synthesis process solely relies on a 7B-sized open-weight model, without reliance on the commonly used proprietary GPT-4. We fine-tune various base models on our datasets ranging from 7B to 70B in size, resulting in a series of strong models called `DART-Math`. In comprehensive in-domain and out-of-domain evaluation on 6 mathematical benchmarks, `DART-Math` outperforms vanilla rejection tuning significantly, being superior or comparable to previous arts, despite using much smaller datasets and no proprietary models. Furthermore, our results position our synthetic datasets as the most effective and cost-efficient publicly available resources for advancing mathematical problem-solving. Our datasets, models and code are publicly available at https://github.com/hkust-nlp/dart-math.
https://openreview.net/pdf/26d6bf8a231686aaa5faf9277e38c2b2d934ff28.pdf
[ { "confidence": 5, "rating": 4, "review_id": "uexL0jAiqC", "review_text": "This paper synthesizes a math reasoning dataset with a designed way of rejection sampling. Many base models show performance improvements on math reasoning tasks after instruction-tuning on this dataset. They promise to release the dataset and models.\n\nTheir curated dataset achieves relatively good instructing-tuning performance with least data amount compared to other baselines. The dataset will be released.\n\n1. The proposed sampling technique is trivial and incremental, when comparing with previous works, e.g., the uniform method is used in ToRA, and the prop2diff method is used in MARIO.\n2. There’s little improvement or even performance drop when tuning Mistral-7B and DeepSeekMath-7B \ncompared to other baselines. As mentioned in the analysis section, this dataset is somehow replaceable by math-specific continual pre-training + supervised fine-tuning (SFT).\n3. The major concern is that even the paper claims the proposed dataset is smaller, however, the LLM used to synthesize the smaller dataset is `DeepSeekMath-7B-RL`, which is trained on a larger SFT dataset. An alternative and reasonable response generation method should be leveraging the `DeepSeekMath-7B-Base` with proper prompting, as `DeepSeekMath-7B-Base` has not been supervised fine-tuned.\n\n1. What’s the query coverage ratio on MATH training set constructed by Prop2Diff?\n2. Any figure or statistics to show the difficulty distribution of your DART dataset? \n3. Any case studies to show the generated responses of your DART dataset? How do you extract answers in the raw response? Responded texts from the LLM are quite likely not to follow your instruction as you apply such a high temperature in sampling process. It’s not likely that simple pipelines, such as regular expressions can achieve this." }, { "confidence": 4, "rating": 4, "review_id": "NSaUVq88kk", "review_text": "The paper introduces Difficulty-Aware Rejection Tuning (DART), a novel approach for enhancing the mathematical problem-solving capabilities of large language models (LLMs). Traditional methods often produce datasets biased towards easier queries, limiting the models' ability to learn from challenging examples. DART addresses this by allocating more sampling trials to difficult queries during the data synthesis phase. The authors created two strategies, Uniform and Prop2Diff, to ensure a balanced representation of easy and difficult queries. Using only open-weight models, the authors generated new, smaller datasets that prioritize difficult queries.\n\n1. The DART method effectively addresses the bias towards easy queries in traditional rejection sampling, which is a significant contribution to the field.\n\n2. The paper provides a thorough analysis of the biases in existing datasets and clearly explains how DART mitigates these issues.\n\n3. The authors plan to make their datasets and models publicly available, contributing valuable resources to the research community.\n\n1. The success of DART relies heavily on the ability of models to generate correct responses for difficult queries, which may not always be feasible for extremely challenging problems.\n\n2. While the focus on difficult queries is commendable, the quality of the generated responses for these queries needs to be high to truly benefit the training process. The paper does not provide a detailed analysis of the quality of these responses.\n\n3. The approach's reliance on extensive sampling for difficult queries might pose scalability issues, particularly for very large datasets or models with limited computational resources.\n\n1. How to be aware of difficulty if not labelled\n2. How to choose $k_u$。\n3. The details of Prop2Diff is missing. How many samples were generated for each difficulty level? What is the equation for generating numbers?" }, { "confidence": 5, "rating": 5, "review_id": "IJoqm7jQlI", "review_text": "The paper proposes a rejection sampling pipeline for automatically generating SFT data, emphasizing that harder data requires more trials. The difficulty is heuristically determined using the ratio of incorrect trials for each question. Experiments demonstrate that this method can outperform traditional rejection methods on various math benchmarks.\n\n- The experiments are solid, showing significant improvements over traditional rejection methods.\n\n- The paper is clearly written and easy to follow.\n\nThe proposed Prop2Diff strategy lacks innovation. Assigning more budget to more complex questions in data synthesis is a common practice. For instance, in [1], which successfully annotated 83.1% of MATH questions, it is evident that harder problems were allocated more budget in rejection sampling. [1] also indicates that fewer and harder data can significantly and efficiently improve performance. The authors should discuss the differences between their approach and the one used in [1] more thoroughly.\n\n[1] ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving\n\nCould you elaborate on how your approach differs from the rejection sampling strategy used in [1]?\n\n[1] ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving" }, { "confidence": 4, "rating": 6, "review_id": "IKhTWGcONb", "review_text": "The paper presents an approach to improving the performance of LLMs in mathematical problem-solving. The authors identify that current datasets synthesized using proprietary models like GPT-4, are biased towards easier queries. To address this, they introduce Difficulty-Aware Rejection Tuning (DART), which allocates more trials to difficult queries during data synthesis. This method generates datasets focusing on difficult queries using an open-weight model, DeepSeekMath-7B-RL, without relying on proprietary models. The authors demonstrate that models fine-tuned on DART-Math datasets significantly those fine-tuned on traditional datasets across various mathematical benchmarks, and beat the best baseline by average of roughly 3-4%\n\n- Technically solid paper with state-of-the-art results.\n- Mostly well-presented and easy to understand.\n- Comprehensive experiments and analysis.\n- Decent impact in improving mathematical capabilities of LLMs, with the authors publicly releasing their dataset.\n- By using an open-weight model, DeepSeekMath-7B-RL, the authors eliminate dependency on proprietary models like GPT-4, making the approach more accessible.\n\n1. It is unclear how the hyperparameters of the baseline, VRT (vanilla rejection tuning), were tuned. For instance, as mentioned in Appendix A.2, sampling temperature is searched from 0.3 to 1.7 for DART. Was the same procedure used for VRT? Another caveat is the need for extensive hyperparameter tuning compared to baselines. Were similar extensive procedures for tuning performed for other baselines?\n2. It is unclear if the improved performance of the proposed method is due to difficulty or the topic of the problem. For instance, LEVEL 5 Math problems may have a higher number of geometry questions (or at least their fail rate is higher, resulting in fewer samples in VRT). An analysis of topic-wise performance comparing DART and baseline methods may clarify this.\n\n**Minor Weaknesses: **\n1. It is unclear how much advantage the method would provide in the case of other multi-iteration fine-tuning methods such as STaR and v-STaR. For instance, it is possible that after multiple iterations, VRT performs similarly to DART, since a higher number of samples will be collected from even the hard problems in second or further iterations.\n2. The data synthesis is only done using the DeepSeekMATH-7B model. It is unclear why this model was chosen. Previous methods using VRT-like methods typically use the same model for synthesis and generation. Thus, higher results in smaller models such as Llama-8B may partly be due to the use of stronger models' reasoning chains, making it similar to a distillation method.\n\n1. The authors use \"fail rate\" as a metric to measure difficulty. However, has any analysis been performed to measure how good an estimate it is of actual model accuracy?\n2. In line 138, \"we sample till correct responses for each query is proportional to its difficulty score,\" does it mean linearly proportional?\n3. To the best of my knowledge, previous works usually use lower temperatures in the range of 0.7-1. However, the authors found 1.6 to be effective. Do the authors have a comparison of results between using a more standard temperature (e.g., 0.7 or 1) compared to 1.6?\n4. In Line 253, the authors state: \"We hypothesize that this is due to these models’ extensive pretraining on mathematical content.\" Do the authors have more points to substantiate this? For instance, it could be partly due to a slightly weaker or similar model being used to generate synthetic data. Further, the hypothesis: \"This pretraining likely covers most skills that could be learned from the GSM8K and MATH training queries\" may not be correct, since, at least for Llama2-70B, the model capacity should not be a bottleneck to achieving higher scores on MATH (e.g., Qwen models). Can the authors provide a more detailed reasoning behind this hypothesis?" } ]
zLClygeRK8
Logarithmic Smoothing for Pessimistic Off-Policy Evaluation, Selection and Learning
This work investigates the offline formulation of the contextual bandit problem, where the goal is to leverage past interactions collected under a behavior policy to evaluate, select, and learn new, potentially better-performing, policies. Motivated by critical applications, we move beyond point estimators. Instead, we adopt the principle of _pessimism_ where we construct upper bounds that assess a policy's worst-case performance, enabling us to confidently select and learn improved policies. Precisely, we introduce novel, fully empirical concentration bounds for a broad class of importance weighting risk estimators. These bounds are general enough to cover most existing estimators and pave the way for the development of new ones. In particular, our pursuit of the tightest bound within this class motivates a novel estimator (LS), that _logarithmically smoothes_ large importance weights. The bound for LS is provably tighter than its competitors, and naturally results in improved policy selection and learning strategies. Extensive policy evaluation, selection, and learning experiments highlight the versatility and favorable performance of LS.
https://openreview.net/pdf/176852333e6e4a1430f1fe58cab4bcb648cf96b2.pdf
[ { "confidence": 2, "rating": 7, "review_id": "BhxU3IfIqu", "review_text": "The paper considers the offline contextual bandit problem. The authors consider a class of reward estimators for this setting that is a regularization of Inverse Propensity Scoring (IPS - aka importance sampling). A general concentration result is provided for this class of estimators. This is used to provide a tight result for an existing clipping IPS estimator and to construct a new Logarithmic Smoothing (LS) estimator. The resulting estimator is pessimistic by design, making it immediately applicable to the offline contextual bandit problem.\nThe authors use it to derive bounds for policy evaluation and selection and also for policy learning in the Bayesian setting. Experimental results also support the usefulness of the estimator.\n\nI am only broadly familiar with this line of research making it hard for me to properly contextualize its contributions.\n\n1. The proposed estimator is novel and has nice properties.\n2. As the name suggests, the estimator is smooth making it potentially easy to optimize.\n3. The application to contextual bandits is interesting.\n4. The experimental results are positive.\n5. The overall writing is good and clear.\n\n1. A more explicit comparison with existing concentration\\contextual bandit bounds is missing. The authors explain that their bound is better but this is somewhat vague, especially if the reader is not already an expert in this field.\n\n2. In line 155 the authors explain that their result can be derived from [1, Lemma 1.3]. Does this mean that the LS estimator has previously been suggested or only that an alternative proof technique exists for its concentration bound?\n\n3. Performance seems very close to that of IX\n\n4. The main body of the paper does not include any explanation of the techniques used. This can be a proof sketch for the concentration bound or a discussion comparing your approach to existing techniques. Can you provide such an explanation in your response?\n\n5. The notation U(pi) appears without definition in line 281. I assume it's defined in one of the references but should also be defined in this paper for completeness. (Please include an explanation in your response)\n\nTypo:\nline 98: one of the brackets is reversed in the definition of h\n\nSee above." }, { "confidence": 5, "rating": 3, "review_id": "XF5XfQZfAG", "review_text": "The authors propose empirical concentration inequalities for off-policy evaluation that apply to several forms of (smoothed) IPS, which are claimed to be tighter than the results in existing works. These bounds are then used to derive policy learning guarantees that inherit the properties of the concentration inequalities.\n\nI appreciate that the authors have applied their method to OPE, OPS, OPL, and also provided some experiments. \n\nI did not read the appendix nor check the correctness of the analysis in detail, but from a quick glance it appears that the authors were careful to provide rigorous and well-organized proofs.\n\nMy biggest criticism is that the authors have not justified *in the main body* their claim that \"LS is provably tighter than its competitors\" (L12) for any of the results, including the concentration inequalities (Prop 1, Cor 3, Cor 4) and the policy learning guarantee (e.g., Prop 6). \n\nSince these claims are the whole premise for the paper, their justification should be a central pursuit and only stating \"x is in Appendix y\" (L147, 178, 195) is hugely insufficient. \n\nFor example, I would have liked to see a discussion on (possibly even in graphs): \n- For the choices of $h$ described in (4), when do the bounds in Prop 1, Cor 3, and Cor 4 improve over the bounds from their respective papers? \n- Is $h*$ (the tightest choice) better than all of the above? \n- Does this hold for all hyperparameter choices, e.g. $\\lambda$ and $L$?\n- How does the computational complexity of calculating the bounds in Prop 1, Cor 3, Cor 4 hold up relative to their competitors? \n- Exactly how does this lead to downstream policy learning improvements?\n\nLastly, I found the overall technical presentation to be relatively poor, and I'll give a few examples: \n- The condition (C1) from Section 2 (\"Regularized IPS\") that all results depend on is never explicitly defined, and it should be an assumption that is called in every proceeding proposition/theorem statement. \n- Shouldn't (11) be framed in, e.g., a lemma environment? \n- The term \"pessimism\" is overloaded, e.g., for \"high-probability upper bounds\" in L111 but also for an in-expectation variant in Eq. (5), which is slightly unusual (and I'm pretty sure not the way it's used in [26]) but not recalled again in the main body so I'm not sure what it's for (perhaps the proof of Prop 1).\n\nIn addition to the ones in \"Weaknesses,\" I have a specific question about Proposition 6. The gold standard in offline policy selection is a bound in the form of $R(\\pi) - R(\\widehat\\pi) \\le \\lambda S(\\pi) + \\varepsilon$ for any comparator policy $\\pi$ rather than the optimal one $\\pi^*$ (see [26] and [Wang 2024] and [Xie 2021]). The former is strictly more general -- can you write your bound in such a form?\n\n\n**References**\n\nWang, L., Krishnamurthy, A., & Slivkins, A. (2024, April). Oracle-efficient pessimism: Offline policy optimization in contextual bandits. In International Conference on Artificial Intelligence and Statistics (pp. 766-774). PMLR.\n\nXie, T., Cheng, C. A., Jiang, N., Mineiro, P., & Agarwal, A. (2021). Bellman-consistent pessimism for offline reinforcement learning. Advances in neural information processing systems, 34, 6683-6694." }, { "confidence": 3, "rating": 6, "review_id": "UKM8OYEx31", "review_text": "This paper studies log-algorithmic smoothing of importance weight for off-policy learning. The proposed smoothing technique can be seen as a differentiable variant of clipping, which is useful for variance reduction for OPL. The paper also analyzes the PAC-Bayes learning bound of the proposed OPL method, characterized by the KL divergence with the logging policy, showing that the proposed method achieves a tighter bound than baselines, including simple clipping. The experiment also shows that the proposed method has tighter bounds than baselines and enables more accurate off-policy selection.\n\n- **Reasonable formulation based on theoretical analysis**: The proposed method is derived from a tight upper bound of the policy's risk. Also, the proposed method has an interpretation as soft, differentiable clipping. The technique is well-motivated and is reasonable to interpret.\n\n- **PAC-Bayes learning bound**: A sub-optimality form is derived, and it is also easy to interpret as a pessimistic approach, which should be acknowledged.\n\n- **Experiments on various tasks**: The paper evaluates the proposed approach in upper bound derivation, off-policy selection, and off-policy learning. The experiment results show the wide applicability of the proposed method in many OPE/OPL-related tasks.\n\n- **Connection to Metelli et al. 2021 is not clear**: Metelli et al. 2021 also considers the importance of weight differential and shows that the proposed method achieves a Subgaussian rate. Similar to the reviewed paper, Metelli et al. 2021 also have a KL divergence term in the theoretical analysis. While the proposed method adequately differs from Metelli et al. 2021, and the paper does cite it, the paper does not mention Metelli et al. 2021 in the related work in detail. Since the motivation and contributions are similar, a detailed discussion on the advantages and the differences would be appreciated.\n\n- **Baselines in the experiments**: As mentioned above, Metelli et al. 2021 propose a similar idea that can be used as a baseline in experiments. Comparing with advanced regularization techniques such as shrinkage (Su et al. 2020) would also be informative.\n\n(Metelli et al. 2021) Subgaussian and Differentiable Importance Sampling for Off-Policy Evaluation and Learning. Alberto Maria Metelli, Alessio Russo, Marcello Restelli. NeurIPS, 2021.\n\n(Su et al. 2020) Doubly robust off-policy evaluation with shrinkage. Yi Su, Maria Dimakopoulou, Akshay Krishnamurthy, Miroslav Dudík. ICML, 2020.\n\n- What are the connections with Metelli et al. 2021? (See weaknesses for the detailed comments.)\n\n- How does OPL work with the varying performance of the behavior policy? In my understanding, the policy will be pessimistic in out-of-distribution, but seeing how it works in experiments would be informative for readers." }, { "confidence": 5, "rating": 10, "review_id": "W3rUllaga2", "review_text": "Policy evaluation, selection and optimization are considered in the context of offline contextual bandits, where i.i.d. data with a known behavior policy is given. The authors set out to study a generalization of importance weighted policy evaluation; for this they start from a general formulation that computes a value for all data observations, which are then averaged. The free \"parameter\" here is $h$, the function that assigns a value given an observation (of a context, associated action, and cost). A tight, general, high probability upper bound on the expected cost of a fixed target policy is derived first. Specific choices for the map $h$ are then derived based on minimizing this upper bound. Two practical solutions to this optimization problem are studied in more details: Global clipping and \"logarithmic smoothing\". Results are then derived for both policy selection and optimization.\n\nNovel ideas, novel results, good empirical results.\n\nDespite saying that the methodology of paper [31] is adopted, this is only partially done. Why deviate from the evaluation in [31]? I expected an explanation of this.\n\nCan you explain why you did not follow the protocol and reported values of [31]?" } ]
zLBlin2zvW
Improving Sparse Decomposition of Language Model Activations with Gated Sparse Autoencoders
Recent work has found that sparse autoencoders (SAEs) are an effective technique for unsupervised discovery of interpretable features in language models' (LMs) activations, by finding sparse, linear reconstructions of those activations. We introduce the Gated Sparse Autoencoder (Gated SAE), which achieves a Pareto improvement over training with prevailing methods. In SAEs, the L1 penalty used to encourage sparsity introduces many undesirable biases, such as shrinkage -- systematic underestimation of feature activations. The key insight of Gated SAEs is to separate the functionality of (a) determining which directions to use and (b) estimating the magnitudes of those directions: this enables us to apply the L1 penalty only to the former, limiting the scope of undesirable side effects. Through training SAEs on LMs of up to 7B parameters we find that, in typical hyper-parameter ranges, Gated SAEs solve shrinkage, are similarly interpretable, and require half as many firing features to achieve comparable reconstruction fidelity.
https://openreview.net/pdf/43584951381c20709a2cb6cf3ebc6ae1b2d501df.pdf
[ { "confidence": 3, "rating": 6, "review_id": "aywyjC2JCN", "review_text": "This work proposed a Gated Sparse Autoencoder (Gated SAE) to mitigate the standard SAEs' biases, such as shrinkage, which systematically underestimate the feature activations from SAEs. The key difference between Gated SAE and SAE is that the Gated SAE separates affine transformations within the encoder in order to decide which dictionary elements to use in a reconstruction loss, and estimate the coefficients of active elements, although with the 50% more computing required to achieve. Comprehensive experiments are conducted to compare and verify how good the Gated SAE is to standard SAE, including a blinded human study to rate and compare the interpretability of randomly sampled Gated and baseline SAE features.\n\n- A new architecture of SAE inspired by GRU is proposed to include a gate mechanism to mitigate shrinkage bias\n- Comprehensive quantitative experiments including ablation studies to evaluate the proposed Gated SAE compared to SAEs\n- A human evaluation to rate randomly sampled features from Gated SAE and SAE\n\n- It is not very straightforward to understand how well the features from Gated SAE are compared to SAE based on Figure 4. Some case studies based on the open-source SAE visualizer library [1] are required to help better understand this.\n- It will be better to see more case studies on downstream tasks to compare Gated SAE and SAE, e.g., automatic circuit detection [2]\n\n[1] C. McDougall. SAE Visualizer, 2024. https://github.com/callummcdougall/sae_vis\n\n[2] Huben, Robert, et al. \"Sparse Autoencoders Find Highly Interpretable Features in Language Models.\" The Twelfth International Conference on Learning Representations. 2023.\n\n- As mentioned in the weakness above, the interpretability analysis was conducted via human rating for randomly sampled feature visualizations. However, those visualizations are not included in the main body or appendix. It will be better to help understand how well the sampled features are when comparing Gated SAE and SAE.\n- Although the Gated SAE has good performance on the loss recovered (fidelity) and relative reconstruction bias, it is still not clear whether features from Gated SAE are better than SAE under downstream tasks. It will make this work very solid if some analysis of mini downstream applications can be conducted, e.g., IOI task, greater-than task, etc." }, { "confidence": 5, "rating": 7, "review_id": "bBjGyNUCZK", "review_text": "The paper attempts to resolve the issue of feature shrinkage in sparse autoencoders (SAEs) by replacing the SAE ReLU activation function with a gated ReLU unit. The weight-tying scheme they use for the gated unit effectively turns it into a jump ReLU activation function.\nThey train gated SAEs and baseline SAEs on a one layer transformer, Pyhtia-2.8B and Gemma-7B. They find that gated SAEs eliminate systematic shrinkage, and consistently outperform baseline SAEs on the pareto-curve of sparsity, measured by the L0 pseudonorm, and faithfulness, measured by the model loss recovered relative to a zero ablation baseline. \nThey run various additional tests involving variations of the gated and baseline SAE architectures, including combinations of SAE dictionaries with the classic gradient pursuit algorithm for choosing sparse feature coefficients at inference time. They conclude that the Pareto improvement of their gated SAEs over their baseline SAEs is due in part to better feature dictionaries, in addition to better estimated feature coefficients.\nThey compare the subjective interpretability of 150 gated SAE and baseline SAE features in Pythia-2.8B and 192 features in Gemma-7B, using a blinded analysis of activating dataset examples. They find that the features were similarly interpretable.\n\nThe paper attempts to address a substantive practical problem with current SAE training methods.\n\n\nThe paper's proposed new architecture is evaluated extensively, and many detailed additional investigations on the individual effects of various parts of the gated SAE architecture are described in sections 5.1, 5.2 and Appendix D. \n\n\nI find Appendix D interesting in its own right, since it shows quantitative comparisons between SAE methods and the classic gradient pursuit optimization algorithm, as well as mixing SAE feature dictionaries with gradient pursuit for sparse approximation of feature coefficients. I have not encountered such a comparison before. \n\n\nFor the most part, good documentation of all their process is provided, and the writing and presentation are very clear in general.\n\nThe paper does not really address the concern that gated SAEs may outperform baseline SAEs in part by implicitly widening the definition of what it means for ‘features’ to be represented in the model. As the paper itself notes in Appendix D, though other more powerful sparse coding algorithms greatly outperform SAEs in terms of reconstruction and sparsity, there are concerns that the greater expressivity of these techniques lets them find spurious ‘features’ that would not be accessible to the model’s own internal computations. An SAE can only find features that are represented in the sense that their current values can be read off with a single ReLU probe, while an inference time algorithm or a multi-layer probe may read off ‘feature’ values that the model itself could not possibly access using a single MLP layer. A gated ReLU is far less expressive than an algorithm like gradient pursuit, but more expressive than a ReLU. So to what extent do gated SAEs outperform baseline SAEs merely because they are implicitly working with a more relaxed definition of what it means for a feature to be represented in the model? Figure 6 in Appendix D incidentally investigates this somewhat, since it attempts to compare the quality of gated vs. baseline dictionaries independent of their coefficients. However, the results there seem inconsistent, with smaller performance gaps and baseline SAEs outperforming gated SAEs at higher L0. I think this issue of the representational power of the probe used is pretty central for contextualizing the results, and should at least have been discussed.\n\nThroughout the paper, the authors present reconstruction scores for SAEs in terms of the fraction of model loss recovered compared to a zero-ablation baseline. I think this metric obscures vital information. Lowering CE loss from e.g. 4.5 to 4.0 is typically much easier than lowering it from 1.5 to 1.0. Thus, the same difference in loss recovered between two SAEs can correspond to very different gaps in SAE quality. Without the raw CE scores, there is no direct way to infer how large the gap is quantitatively. At minimum, these raw CE scores should be in the supplementals. Better yet, the recovered performance could additionally be reported in terms of the compute required to train a model with the same CE score, as suggested in https://arxiv.org/abs/2406.04093.\n\nWhy are the raw CE loss recovered scores not in the paper? Since it is typically much harder to lower CE loss from e.g. 4.5 to 4.0 than from 1.5 to 1.0, it is difficult to evaluate the quality gap between baseline and gated SAEs, or the quality of the baseline SAEs, without these scores." }, { "confidence": 4, "rating": 6, "review_id": "j3yUgLi8AN", "review_text": "This work introduces a new technique under mechanistic interpretability's sparse autoencoders. By using a less naive SAE, with a gating mechanism and a little extra computation, the paper shows a decent improvement over the baseline.\n\nThis work addresses the important issue of interpreting transformer-based LLMs and clearly demonstrates an interesting method. The mechanistic interpretability community will certainly find this work of interest.\n\nThe writing is well written and fairly easy to follow, the results are clearly presented, and all relevant aspects of the method are appropriately ablated\n\nI liked the setup of the internal user study; I think future papers will follow the design of the study closely.\n\nThe work's cited throughout the manuscript are incredibly thorough.\n\nWhile I generally like the paper, I have two primary concerns:\n\n* The architecture and loss are somewhat difficult to understand. I did appreciate the pseudo-code in the appendix, but I feel for readers not familiar with SAEs may have a hard time, especially with the optimization-based design choices of weight tying. Perhaps explaining weight tying later in 3.2 would help. I would especially prefer if a few lines of pseudo code could be added in the main paper, next to figure two.\n* The user study results. I don't mind the small change in means between the method and the baseline, but the explainable AI community has been around for a long time and the shift from studies with a few experts to larger cohorts has been the norm for a while now. Just because there's a rebranding to mechanistic interpretability doesn't mean this field should settle for underpowered studies. Nevertheless, I do find the study setup itself to be well articulated and a very useful starting point for future work in this area.\n\nMinor:\nSome of the design choices (weight-tying, no r_mag, etc) aren't well explained until the ablation where we find they are primarily for optimization. This could be motivated a little earlier, i.e. that the pareto improvement comes from the separation, and not those choices.\n\nNone" }, { "confidence": 3, "rating": 7, "review_id": "TfWYoJfNTH", "review_text": "This paper introduces Gated Sparse Autoencoders (Gated SAEs), an improvement over standard sparse autoencoders (SAEs) for decomposing language model activations. The key idea is to separate the tasks of detecting which features are active and estimating their magnitudes, allowing the sparsity penalty to be applied only to feature detection. Through experiments on language models up to 7B parameters, the authors show that Gated SAEs achieve better reconstruction fidelity for a given level of sparsity compared to baseline SAEs, while resolving issues like shrinkage. A human evaluation study finds Gated SAE features to be comparably interpretable to baseline features.\n\n- A well-motivated architectural modification to SAEs that addresses key limitations\n- Comprehensive empirical evaluation across multiple model sizes and activation sites demonstrating clear improvements over baseline SAEs\n- Careful ablation studies and analysis to understand the source of improvements\n- Human evaluation study to assess interpretability of learned features\n- Thorough discussion of limitations and future work directions\n\n- The presentation could be improved in some areas, particularly in explaining some of the technical details and metrics\n- Some of the figures are quite dense and could be made more readable\n- The human evaluation study, while valuable, has a relatively small sample size\n\n- Do you have any insights on how Gated SAEs might scale to even larger language models? Are there any potential limitations as model size increases?\n- Have you explored using Gated SAEs for any downstream mechanistic interpretability tasks beyond the basic reconstruction and interpretability metrics? For example, does the improved reconstruction enable better circuit analysis?\n- The weight tying scheme seems important for computational efficiency. Have you explored any alternative tying schemes? Is there a theoretical justification for why this particular scheme works well?" } ]
zJremsKVyh
Marginal Causal Flows for Validation and Inference
Investigating the marginal causal effect of an intervention on an outcome from complex data remains challenging due to the inflexibility of employed models and the lack of complexity in causal benchmark datasets, which often fail to reproduce intricate real-world data patterns. In this paper we introduce Frugal Flows, a likelihood-based machine learning model that uses normalising flows to flexibly learn the data-generating process, while also directly targeting the marginal causal quantities inferred from observational data. We provide a novel algorithm for fitting a model to observational data with a parametrically specified causal distribution, and propose that these models are exceptionally well suited for synthetic data generation to validate causal methods. Unlike existing data generation methods, Frugal Flows generate synthetic data that closely resembles the empirical dataset, while also automatically and exactly satisfying a user-defined average treatment effect. To our knowledge, Frugal Flows are the first generative model to both learn flexible data representations and also \textit{exactly} parameterise quantities such as the average treatment effect and the degree of unobserved confounding. We demonstrate the above with experiments on both simulated and real-world datasets.
https://openreview.net/pdf/5ca85bc9b90e258067e112db30bfa5eae96a4a2a.pdf
[ { "confidence": 3, "rating": 7, "review_id": "wSUiyYafZQ", "review_text": "This paper introduces _Frugal Flows_ a method that learns the data distribution of data for causal effect estimation; namely outcome $Y$, binary treatment $X$ and pretreatment covariates $\\mathbf{Z}$. \nThrough a combination of frugal parametrisation, normalizing flows and copulas, separate components for the marginal causal effect $p_{Y| do(X)}$, the probability integral transforms of $\\mathbf{Z}$ and the propensity score are leaned.\n(The components of) the learned model, can be used for (i) estimating the marginal effect and (ii) to generate synthetic data with a fixed marginal effect for benchmarking other causal inference methods.\nIn the second application the component for the marginal effect is switched out for another with desired properties.\n(i) is demonstrated on small synthetic datasets.\n(ii) is demonstrated by fitting FFs to two real-world datasets and generating synthetic data with adjusted properties.\n\n- The paper is well-written.\n- It tackles an important problem in causality research. Since, randomized data is hard and expensive to get, many causal methods are only evaluated on synthetic data and generating realistic/semi-synthetic data is hard. This paper makes a great contribution towards improving synthetic data generation. If the code for the method is provided in a user-friendly manner, I could see this having a big impact on the causality community.\n\n- Normalizing Flows have been used in the causal modelling context before (see [1, 2]). While prior works solve different problems (the inferred latents correspond to exogenous variables of an SCM, not directly applied to causal effect estimation), I think it would still be valuable to contrast this work to what has been done before for future reference in the literature.\n- L59: The basic causal assumptions aren't explicitly stated. What are the causal assumptions on $X$, $Y$ and $\\mathbf{Z}$? It seems like the method wouldn't hold if $\\mathbf{Z}$ was a mediator (I suppose the equation after L60 wouldn't hold). A reference to a 500+ page book is given for the assumptions, which feels like a slap in the face for the reader.\n- The notation for interventional distributions is confusing: what's the difference between using an asterisk and explicitly using the do-notation? In the equation after L60, the LHS seems to be an interventional quantitiy (asterisk, but no do-notation), whereas Equation (1) has the do-notation, but no asterisk. Do the two notation elements mean different things?\n- I think this paper would greatly benefit from a visual abstract showing how the different flows and distributions come together. Maybe this is something that could be added for the camera-ready.\n\n\nMinor:\n\n- L201: typo\n\n[1] Javaloy et al. \"Causal normalizing flows: from theory to practice.\" NeurIPS 2023\n\n[2] Wendong et al. \"Causal component analysis\" NeurIPS 2023\n\n- Fig. 1: What's the meaning of the red undirected edges. Does it mean they could go either way and/or they could be confounded? Please specify this somewhere in the paper or appepndix.\n- Def. 1, App. A: I struggle to understand this definition. The cartesian product on the RHS makes sense to me, but what does the LHS of the equation mean? What is the \"x\" operation between functions? Do the two functions map to the same space?\n- Fig. 2: Why is the first line needed? If I understand correctly, this just makes all pretreatment variables uniform. Couldn't you put $\\mathbf{Z}$ directly in the second line?\n- You show synthetic data generation based on two datasets in Sec. 4.2. Why couldn't you use the same datasets to test causal effect estimation in Sec. 4.1?\n- How many datapoints are in the Lalonde dataset?\n- In training, how did you check whether the training has succeeded? I suppose you minimize the log-likelihood, how did you define \"good enough\"? I'm asking because the training seems pretty fast (App. D2.4). What's the total number of parameters for each of the datasets?" }, { "confidence": 4, "rating": 7, "review_id": "dRPnh1a0br", "review_text": "The paper introduces a generative modeling approach called Frugal Flows, designed to learn the data generation process with an explicit parametrization for the marginal causal effect of treatment on outcomes. Inspired by the frugal parametrization of marginal structural models, this approach models the marginal intervention distribution $p(Y|do(X))$ directly, rather than the joint distribution $p(Y|Z, do(X))$. This helps in preserving any constraints on the average treatment effect while flexibly modeling the data generation process. Frugal Flows employs copula flows to parameterize the model, accommodating constraints on the average causal effect and handling unobserved confounding during data generation. The authors validate the proposed method through experiments on both synthetic and real-world datasets, demonstrating its ability to generate realistic datasets with user-specified constraints.\n\n* The paper's approach to validating causal models using simulated datasets is indeed impactful and relevant. It addresses a significant gap by allowing for general constraints on quantities of interest, such as average causal effect and unobserved confounding, during data generation. This capability is crucial because many prior generative modeling approaches for causal datasets either do not offer such flexibility or cannot ensure the preservation of these constraints, thus making this work a notable advancement in the field.\n\n* The paper is well-written, with clear explanations in the background sections on frugal parametrization and flows, which help the reader grasp the proposed approach. The details of the approach are well-articulated, and the experimental results are presented effectively.\n\n* The proposed approach is indeed novel. While it builds on established concepts like frugal parametrization, the specific application of normalizing flows for parametrization and its focus on average causal effect estimation represent a significant and innovative contribution.\n\nMy main concern with the work is the limited empirical validation of the proposed approach. Given that the primary contribution is the learning methodology rather than theoretical analysis, I would expect a more extensive set of experiments to validate its effectiveness. For example, prior research on generative modeling for causal inference, such as the work by [1], includes comprehensive experiments with various statistical tests to assess the realism of generated samples and a broader benchmarking of causal estimators. This paper would benefit from similar depth in its empirical evaluation.\n\nIt would nice if the authors can conduct similar experiments to asses whether learned generative model generates realistic samples and evaluate it on more datasets. Also, the authors should compare with the prior works [1, 2] as baselines to establish which approach is the best at capturing the underlying data generation process, and empirically validate their claim (Section 2.6) that the proposed approach would be better than prior works at capturing used-specified constraints on the average causal effect.\n\nReferences\n\n[1] Neal, Brady, Chin-Wei Huang, and Sunand Raghupathi. \"Realcause: Realistic causal inference benchmarking.\" arXiv preprint arXiv:2011.15007 (2020).\n\n[2] Harsh Parikh, Carlos Varjao, Louise Xu, and Eric Tchetgen Tchetgen. Validating causal inference methods. In International conference on machine learning, pages 17346–17358. PMLR, 2022.\n\n* A suggestion for the notation is that the authors could use $T$ instead of $X$ to denote the treatment variables in the paper. This way the notation would be less confusing, as $X$ represents a general random variable in Section 2.5 and Section 2.6\n\n* Maybe there is a typo in Figure 2? The top row should be $\\mathcal{F_{Z_i}}^{-1}(.)$ as we transforming the covariate $\\{ Z_i \\}$ to the correlated uniform variables $\\{ V_{i} \\}$\n\n* I don't understand the Section 3.1.1 on copula flow for $X$ on $Z$. Why don't we directly model $p(X|Z)$ using a normalizing flow and why do we need to parametrize using copula flow as $p(X|Z)= p(X).c(X|Z)$?" }, { "confidence": 3, "rating": 6, "review_id": "EkGmnslyYW", "review_text": "This paper proposed a generative model called Frugal Flows making use of copula flows to infer about marginal causal effects by simulating the data generating process.\n\n- The problem of inferencing marginal causal effects is an interesting and important problem\n- The idea of using generative models to estimate the marginal effects in the paper is interesting\n\nSee questions.\n\nI have two question --\n\n1. A highly related (and I suspect might be viewed as a \"dual\" appproach to your Frugal Flow) is the statistical matching (e.g., bipartite matching) to estimate the average treatment effect. It would be very informative to compare this as one of the baselines in your benchmarking and validation, as this also shed lights on how these two different schools of causal inference may (or may not) converge on ATE estimation.\n\n2. I think it is good to use real data for benchmarking/validation (but perhaps benchmarking is a bit strong here since only two datasets were used), but usually it is unclear how to interpret the results since the ground truth is unknown. Can you design and run some controlled synthetic experiment to verify the model?" }, { "confidence": 3, "rating": 7, "review_id": "MNRE6lf7KL", "review_text": "This work proposes to leverage existing neural density estimators (specifically, normalizing flows) to exploit a newly-proposed \"frugal parametrization\" that can capture the causal marginal distribution of an underlying causal model. Under this parametrization, the authors show how to specify and train each component of the model, and thus train the proposed Frugal-Flows to match the observational distribution as closely as possible, while being able to tune the marginal causal effect present in the generated data.\n\nThis way, frugal flows can be used to generate synthetic causal benchmarks that closely represent the _observational_ data while having more difficult-to-estimate causal effects, putting existing approaches to the test.\n\n- **S1.** The proposed frugal flows provide a way of generating new datasets that can be challenging from a causal-inference point of view, which I believe _important_ to test new and existing methods.\n- **S2.** The construction of the proposed architecture is quite rich in details.\n- **S3.** I find the frugal parametrization conceptually quite interesting.\n- **S4.** The authors motivate different scenarios for frugal flows in Sec. 3.2, as well as empirically show positive results on some synthetic and real-world scenarios.\n\n- **W1.** I find the frugal parametrization to be extremely under-explained, relying too much on the reader having full knowledge of the referenced work. Similarly, there is little to no explanation/intuition on why the frugal parametrization would properly capture the marginal causal distributions.\n- **W2.** The lack of explanations also applies to other concepts, e.g., \"conditional ignorability\" (line 39) \"variation independence\" (line 82, and I know the definition is later in App. A), or why copula-based flows would target conditional causal effects instead of marginal causal ones (line 182). (similar with lines 221 and 229)\n- **W3.** There are no mention to related works that propose similar ways of constructing causal benchmarks. From a 1-min search in google scholar, I already found some likely relevant works: [Work 1](https://arxiv.org/abs/2406.08311), [Work 2](https://arxiv.org/abs/2011.15007).\n- **W4.** I find the experiments a bit underwhelming, specially those from Section 4.1. The authors should at least show how is the fitting of the observational likelihood, and if they want to show the capabilities of frugal flows for causal inference (and not only causal-benchmark generation), they should compare with other methods like [Causal Normalizing Flows](https://arxiv.org/abs/2306.05415).\n\n- **Q1.** I am not sure that I understand what does the dotted red line represent in the boxplots.\n- **Q2.** Doesn't the statement in lines 291-294 directly contradict what you say later in lines 296-297?\n- **Q3.** Why is Figure 1 placed there?" } ]
zJNSbgl4UA
Slicing Vision Transformer for Flexible Inference
Vision Transformers (ViT) is known for its scalability. In this work, we target to scale down a ViT to fit in an environment with dynamic-changing resource constraints. We observe that smaller ViTs are intrinsically the sub-networks of a larger ViT with different widths. Thus, we propose a general framework, named Scala, to enable a single network to represent multiple smaller ViTs with flexible inference capability, which aligns with the inherent design of ViT to vary from widths. Concretely, Scala activates several subnets during training, introduces Isolated Activation to disentangle the smallest sub-network from other subnets, and leverages Scale Coordination to ensure each sub-network receives simplified, steady, and accurate learning objectives. Comprehensive empirical validations on different tasks demonstrate that with only one-shot training, Scala learns slimmable representation without modifying the original ViT structure and matches the performance of Separate Training. Compared with the prior art, Scala achieves an average improvement of 1.6% on ImageNet-1K with fewer parameters.
https://openreview.net/pdf/d586f3321f7b5f7435024391bab2f98aeaac3132.pdf
[ { "confidence": 5, "rating": 4, "review_id": "QhLxKgd6z1", "review_text": "The paper targets scaling down Vision Transformers (ViT) to fit environments with dynamically changing resource constraints. The authors propose Scala, a framework enabling a single network to represent multiple smaller ViTs with flexible inference capability by activating various subnets during training. Scala introduces Isolated Activation to disentangle the smallest sub-network and uses Scale Coordination to provide stable and accurate learning objectives. Empirical validations on different tasks show that Scala achieves scalable representation with one-shot training, matching the performance of Separate Training without modifying the original ViT structure. Scala demonstrates an average improvement of 1.6% on ImageNet-1K compared to previous methods, using fewer parameters.\n\n1. The problem is important in practice.\n2. The experimental results seem decent.\n\n1. My major concern is that, the same aim of adapting ViTs to dynamically changing resource constraints, can also be achieved by multi-exit networks, e.g., [*1, *2, *3]. However, the paper does not discuss these highly relevant works or compare with them. Hence, I vote for rejection.\n2. The method seems to lack novelty. 'smaller ViTs are intrinsically the sub-networks of a larger ViT with different widths' is not a surprising observation. The key techniques (e.g., Isolated Activation and Knowledge Distillation) are not new (naive or have been widely adopted).\n\n\n[*1] Huang, Gao, et al. \"Multi-Scale Dense Networks for Resource Efficient Image Classification.\" International Conference on Learning Representations. 2018.\n\n[*2] Wang, Yulin, et al. \"Not all images are worth 16x16 words: Dynamic transformers for efficient image recognition.\" Advances in neural information processing systems 34 (2021): 11960-11973.\n\n[*3] Han, Yizeng, et al. \"Dynamic perceiver for efficient visual recognition.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\n\nPlease refer to Weaknesses." }, { "confidence": 3, "rating": 7, "review_id": "JB0FjHE479", "review_text": "The paper presents Scala, a novel framework for scalable representation learning developed from US-Net. It identifies the issues of directly applying US-Net to ViTs and proposes solutions including Isolated Activation, Scale Coordination, and Stable Sampling. These innovations enable Scala to output several sub-networks in one-shot learning. Extensive experiments on various network architectures and datasets demonstrate that the sub-networks produced by Scala consistently outperform those generated by separate training, with significantly reduced training time.\n\nOriginality: Scala addresses the limitations of US-Net and successfully applies the concept of scaling to ViT backbones. This is a significant step in the adaptation of scaling methods for more complex network architectures.\n\nQuality: The paper supports its claims with extensive experimental results, providing strong evidence for the effectiveness of Scala.\n\nClarity: The paper is clearly written and well-organized, making it accessible and easy to follow.\n\nSignificance: Scala has the potential to influence future research directions in scaling ViTs.\n\nOriginality: The novelty of Scala is somewhat constrained. For instance, Noise Calibration does not show a distinct difference from standard knowledge distillation. Essentially, Scala integrates US-Net with an alternative activation for the smallest subnet and fixed scaling ratios.\n\nQuality: The authors might consider emphasizing results from a more standard 300-epoch ViT training schedule to align with common practices in the field.\n\nClarity: No further issues.\n\nSignificance: The challenge of scaling ViTs with arbitrary ratios remains unresolved.\n\n1. Regarding the issue shown in Fig. 4, is it always the smallest subnet that causes the issue, or does it occur with subnets having a scaling ratio near 0.25?\n\n2. Can the authors clarify any differences between Noise Calibration and standard knowledge distillation?\n\n3. What would be the impact if the distillation part were discarded and only Cross-Entropy (CE) loss were used for the initial epochs?" }, { "confidence": 3, "rating": 5, "review_id": "smfMfLLH1q", "review_text": "The paper introduces Scala, a novel framework designed to effectively scale down Vision Transformers (ViTs) for use in environments with fluctuating resource constraints. The key insight is that smaller ViTs can function as sub-networks within a larger ViT, differing mainly in width. Scala enables a singular network architecture that can emulate multiple smaller ViTs, thereby offering versatile inference capabilities while maintaining the structural principles of ViTs. The framework uniquely incorporates multiple sub-networks during its training phase, utilizes Isolated Activation to differentiate the smallest sub-network, and implements Scale Coordination to streamline the learning objectives for each sub-network, aiming for simplicity, stability, and accuracy. The empirical results across various tasks confirm that Scala can learn scalable representations efficiently with a single training iteration, maintaining the integrity of the original ViT architecture and achieving performance on par with networks trained separately.\n\nThe proposed Scala framework aims to enhance Vision Transformers (ViTs) by enabling them to learn scalable representations suitable for flexible inference. This is achieved through two key innovations: Isolated Activation, which effectively disentangles the representation of the smallest subnet to maintain clarity and specificity, and Scale Coordination, which ensures that each subnet within the larger network receives simplified, consistent, and accurate signals. These mechanisms are designed to optimize the performance and scalability of ViTs, addressing common challenges in adapting these architectures to varied and dynamic operational contexts.\n\n1. Recent papers[1,2,3] with \"Scalable\" usually scale ViT to billion size with large scale datasets like DFN, JFT, and Datacomp. Therefore, I suggest authors should reconsider if the experiments can support \"Scalable\".\n\n\n[1] Zhai, X., Kolesnikov, A., Houlsby, N., & Beyer, L. (2022). Scaling vision transformers. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 12104-12113).\n\n[2] El-Nouby, A., Klein, M., Zhai, S., Bautista, M. A., Toshev, A., Shankar, V., ... & Joulin, A. (2024). Scalable pre-training of large autoregressive image models. arXiv preprint arXiv:2401.08541.\n\n[3] Dehghani, M., Djolonga, J., Mustafa, B., Padlewski, P., Heek, J., Gilmer, J., ... & Houlsby, N. (2023, July). Scaling vision transformers to 22 billion parameters. In International Conference on Machine Learning (pp. 7480-7512). PMLR.\n\nplease refer the weakness." }, { "confidence": 3, "rating": 6, "review_id": "2lgIFFBLKe", "review_text": "This paper advances an approach for training Vision Transfomers (ViTs) such that at inference time they can be dynamically adjusted to fit different budget constraints with reduced drops of performance. To this end, the authors introduce Scala, a framework that allows a single network to encapsulate and train simultaneously multiple sub-networks of different capacities and widths. The methodological backbone of this work are the Universally slimmable networks (US-Net) [37], originally devised for CNNs. The authors identify and analyze a few flaws of US-Nets: difficulty to generalize to ViTs, small interpolation and extrapolation ability to sub-network size unseen during training, impact of sustained activation of the smallest sub-network that coupled with the sandwich rule for selecting sub-networks during training leads to an over-emphasis on it at the expense of the other sub-networks.\nThe authors propose two simple strategies towards such a method for ViTs: (i) Isolated activation that separates the smallest sub-network from the other sub-networks; (ii) scale coordination consisting of a set of heuristics to ensure that each sub-network gets simple, accurate and stable learning objectives: (a) progressive knowledge transfer from larger networks to smaller ones in gradual decrease of capacity, (b) stable sampling of intermediate width ratios to avoid large variations in capacities in the sandwich, (c) noise calibration, essentially a composite loss of supervised cross-entropy and distillation from the bigger sub-network.\nScala is evaluated on several settings on the ImageNet-1k dataset with ViT-Ti/S/B, hybrid CNN-ViT architectures, lightweight networks, but also for dense prediction on semantic segmentation and self-supervised pre-training with interesting results. The baselines used here were Separate Training, Autoformer and US-Net.\n\n### Significance\n- the paper deals with a challenging and useful task for deploying ViT models into different operational settings with different computational constraints without retraining or distilling specific architectures each time\n\n- although a computational overhead is expected for such methods, the main components of Scala are relatively simple and make sense \n\n- Scala achieves good performance with a higher boost in the low parameter regime\n\n### Originality\n- the proposed contributions are somehow incremental as they are improving the US-Net prior work, but do have some novelty and they are simple.\n\n### Clarity\n- in general this work is well argued and easy to follow. The authors construct well the arguments regarding the challenges when going from CNNs to ViT with US-Net and how to construct their Scala approach.\n\n### Quality\n- the paper offers several experiments and studies in the main paper and in the appendix (longer training, fast interpolation, ablation of components) that are well thought and improve the understanding of the method.\n\n- I appreciate the experiments beyond image classification, on semantic segmentation, as well as the self-supervised pretraining and subsequent linear probing on a downstream task.\n\n### \"Scalable\" naming\n- I think that the framing of the method as _\"scalable representation learning\"_ is quite confusing as it's not representative for this task, it's not a name used by other related works. Importantly, it can be easily mistaken with most works that use \"scalable\" for depicting the ability/property of a system (method, architecture) to handle a growing amount of data, parameters, and the potential to accommodate this growth. In other words \"scalable\" is rather used for depicting scaling up, whereas this work depicts the property of the proposed approach to accommodate sub-networks of different lower sizes/scales from the original.\n\n- maybe other names us in related works would be more appropriate here: slimmable, elastic, modular, flexibile inference, etc.\n\n\n### Limited baselines and related work\n- some relevant related works dealing with tranformer networks are either just briefly mentioned, e.g., Matformer [18], or not mentioned at all, e.g., SortedNet [a], Early exit [b]\n\n- One of the main baselines, US-Net is originally designed for CNNs and, as the authors mentioned, moving to ViTs is not straightforward. Matformer is criticized for the limited number of models produced, but can be considered in the several experiments with X=4 sub-networks. Matformer and SortedNet could be included in the experimental evaluation\n\n\n### Scope of experiments\n- While the authors considered several settings for computer vision tasks (image classification, segmentation, light architectures), transformer architectures are also encountered in NLP (as mentioned by the authors in L56). In such cases the original models can have much more parameters and elastic inference for lower computational budgets would be of high interest.\n\n- It would be useful to include an experiment from NLP in the style of those from Matformer or SortedNet.\n\n- The biggest architectures used here is a ViT-B (~86M params). Extending experiments to larger modern architectures would be definitely useful and interesting.\n\n### Clarity\n- it's not always clear in the text and cost estimations that Scala needs a pre-trained full network as teacher for the distillation. This add some cost in compute and time in the end. Besides it's not clear whether US-Net also needs and uses a pre-trained teacher in the reported results.\n\n- in the intro, the authors mention that they address the issue of minimal interpolation ability of ViTs. Results from Table 2 show that the interpolation abilities of ViTs with Scala are still very low. However the fast interpolation strategy from $\\S$A.2 is actually interesting for practical settings even though not fully solving this issue. It might be worth moving up in the main paper.\n\n- the idea of the transferability experiment ($\\S$5.4) with DINOv2 is nice. From the description it is not clear whether DINOv2 was used as teacher for the distillation or also as supervised pre-training on ImageNet-1k? Or the pre-training on ImageNet-1K was done in a supervised manner as in previous experiments?\n\n- the ablation experiment from Table 6 is nice. However the presentation with removing one component at once offers only a partial understanding of the contributions of each module. Different configurations with different modules in on/off mode should give a better global understanding.\n\n\n\n**References:**\n\n[a] Valipour et al., SortedNet: A Scalable and Generalized Framework for Training Modular Deep Neural Networks, arXiv 2023\n\n[b] Xin et al., Deebert: Dynamic early exiting for accelerating bert inference, ACL 2020\n\nThis paper takes an interesting direction of study: how to train ViTs such that a fine-grained elasticity in terms of sub-network sizes is possible at runtime.\nThe proposed Scala approach is well described, makes sense and achieves good results in several computer vision settings.\n\nI do have a few concerns related to the phrasing of this type of works (\"scalable representation learning\") which can be confusing, the absence of larger architectures and of recent relevant baselines.\nMy current rating is mildly positive (rather on the fence though) and I'm looking forward for the rebuttal.\n\nHere are a few questions and suggestions that could be potentially addressed in the rebuttal or in future versions of this work (please note that suggested experiments are not necessarily expected to be conducted for the short rebuttal period):\n\n1. Please clarify the points raised in the clarity section: use of teacher model for Scala and US-Net, implementation of transferability experiment. \n\n2. Comparison of training cost between Scala (including teacher training), US-Net and Separate Training baselines.\n\n3. Add a discussion of differences and when possible experimental comparison with Matformer and SortedNet baselines on image classification or semantic segmentation.\n\n4. Extension of experiments to NLP architectures and tasks in the style of SortedNed, Matformer" } ]
zIr2QjU4hl
Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models
AI-driven design problems, such as DNA/protein sequence design, are commonly tackled from two angles: generative modeling, which efficiently captures the feasible design space (e.g., natural images or biological sequences), and model-based optimization, which utilizes reward models for extrapolation. To combine the strengths of both approaches, we adopt a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL. Although prior work has explored similar avenues, they primarily focus on scenarios where accurate reward models are accessible. In contrast, we concentrate on an offline setting where a reward model is unknown, and we must learn from static offline datasets, a common scenario in scientific domains. In offline scenarios, existing approaches tend to suffer from overoptimization, as they may be misled by the reward model in out-of-distribution regions. To address this, we introduce a conservative fine-tuning approach, BRAID, by optimizing a conservative reward model, which includes additional penalization outside of offline data distributions. Through empirical and theoretical analysis, we demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models while avoiding the generation of invalid designs through pre-trained diffusion models.
https://openreview.net/pdf/208379a521961503552a6647a7533a7037e81262.pdf
[ { "confidence": 2, "rating": 6, "review_id": "Dy0r2GPpf1", "review_text": "This paper presents a conservative fine-tuning method called BRAID, which integrates the strengths of diffusion models and model-based optimization (MBO) to improve the performance of pre-trained diffusion models on offline datasets. BRAID optimizes a conservative reward model that includes penalties outside the offline data distribution to prevent overoptimization and generate valid designs. The approach is validated through empirical and theoretical analyses, demonstrating its ability to outperform the best designs in offline data while avoiding the generation of invalid designs. The paper also discusses the method's effectiveness compared to existing conditional diffusion models and traditional MBO techniques, with experiments showcasing its superiority in biological sequence and image generation. The authors acknowledge the limitations of their study, particularly in model selection and hyperparameter tuning, and suggest future research directions.\n\n* BRAID incorporates a conservative approach to fine-tuning diffusion models, which includes penalization terms that discourage the model from generating designs outside the distribution of the offline data. This conservative strategy is effective in preventing overoptimization and ensuring the validity of the generated designs.\n\n* The method is supported by both theoretical analysis and empirical results. Theoretically, it provides a regret guarantee, ensuring that the fine-tuned models can outperform the best designs in the offline data. Empirically, it has been validated through experiments across various domains, such as biological sequences and images, demonstrating its ability to generate high-quality designs.\n\n* Difficulty in tuning hyperparameters without online data interaction.\n* Reliance on accurate reward and diffusion models for effective performance.\n* Theoretical results depend on certain idealized assumptions that may not hold in all cases.\n* Can you compare the methods with other SOTA offline RL methods to illustrate your proposed augmented methods more effective than the SOTA offline RL methods? I think this paper is very relevant to some offline RL methods, such as ReDS[1], A2PR[2], CPED[3], SCQ[4]. It is not required that experimental comparisons must be given, but at least add some discussion with these methods to the paper.\n\nReferences:\n\n[1] Singh, Anikait, et al. \"ReDS: offline reinforcement learning with heteroskedastic datasets via support constraints.\" Proceedings of the 37th International Conference on Neural Information Processing Systems. 2023.\n\n[2] Liu, Tenglong, et al. \"Adaptive Advantage-Guided Policy Regularization for Offline Reinforcement Learning.\" In International Conference on Machine Learning (ICML). PMLR, 2024.\n\n[3] Zhang, Jing, et al. \"Constrained policy optimization with explicit behavior density for offline reinforcement learning.\" Advances in Neural Information Processing Systems. 2023\n\n[4] Shimizu, Yutaka, et al. \"Strategically Conservative Q-Learning.\" arXiv preprint arXiv:2406.04534 (2024).\n\n* The methods will need more samples, as shown in the pseudo-code of the algorithms 2 Direct Back Propagation (General case), which may bring more computational burden. Meanwhile, the methods train use two diffusion model to obtain the policy. Can you give me some experiments to show the computational burden? \n* Have the authors considered alternative generative models such as GANs or VAEs, and can they provide a comparative analysis of performance and resource usage?\n* How to obtain the $\\hat{g}$ in the pseudo-code of Algorithm 1 BRAID? \n* This methods is related to offline reinforcement learning. So can you give me some experiment comparisions with the SOTA offline RL methods ?" }, { "confidence": 3, "rating": 7, "review_id": "gVNShYNhh3", "review_text": "The paper tackles the task of black box optimization in an offline setting. Given a pretrained diffusion model, they first train a surrogate model on the offline data and use it to tilt the diffusion model distribution via finetuning it. The authors distinctly focus on an uncertainty quantification based procedure to bias the diffusion model tilting toward regions where the reward is high and the reward uncertainty is low while not tilting toward regions with high uncertainty. Experiments are carried out on a reasonable set of diverse tasks.\n\nThe paper introduces a small specific challenge and address it well with a reasonable approach and good motivations. The potential impact of the method may be small but it is neat, educational, and should be useful in many cases. I recommend acceptance.\n\n1. Identifying a crucial non-obvious overlooked challenge and specifically pinpointing it. The authors identify that previous work on tilting diffusion models with reward models misses to incorporate tilting less toward regions of the reward function in which it has high uncertainty but high reward. Instead, we should only tilt to the regions that have high reward and high certainty to avoid optimizing toward adversarial examples. (The fact that the finetuned diffusion model will steer away from the pretraining distribution seems like a less relevant insight)\n2. The authors identify a relevant overlooked problem in finetuning diffusion models and bring standard techniques from uncertainty quantification into the field to address it in a reasonable fashion. They do not overcomplicate things and their technique could be valuable to several researchers in the area. \n3. The authors prove that the training procedure yields the desired distribution. I have no comments regarding the value/insightfulness of the proof. Maybe other reviewers have a stronger opinion about the relevance.\n4. The authors evaluate their method on a very diverse set of experiments that includes discrete DNA sequence generation, and image generation. The results are convincing and demonstrate the central empirical claim that out of distribution generation is a problem and is effectively avoided with the proposed conservative reward model fine tuning.\nMinor:\n1. Interesting snippets of insights. The authors point out interesting relationships and connections along the way which are non-obvious and well placed for putting their motivations into context.\n2. Exceptional clarity in writing. The paper lays out the task in its precise specification and covers required concepts and related work in equal clarity.\n\n1. I would say that the insights in terms of methodological novelty are on the moderate side. The ideas are simple and good, which is appreciated, but the level on which the conceptural changes operate are low level (a tweak to diffusion model tilting) and thus limited in impact. However, it is certainly a good thing to have.\nVery hard to address and not a must have for ML conferences:\n1. Evaluations are inherently limited in their computational nature and the conclusions that can be drawn for the procedures effectiveness in biological sequence optimization is small. Do you authors disagree with this in any way?\n\n1. Do I understand correctly that theorem 2 only states that the finetuned diffusion model will incur lower regret than the pretrained diffusion model? Why is that useful and would we not rather be interested in relations between using your additional uncertainty quantification bias for the reward model based finetuning and not using it?\n2. It seems to me that reward models are often very poor in scientific applications (more so than the generative models that can be trained on a lot more data). Does this mean that their uncertainty estimates are also bad and your method might not provide any improvements in these cases?" }, { "confidence": 4, "rating": 5, "review_id": "ZeCXOYk38w", "review_text": "This paper proposes a conservative approach for fine-tuning diffusion models with a reward model learned from offline data. Specifically, the ideas are two-fold: The first idea is to replace the reward model with a conservative estimate based on classical generalization bounds. The second idea is to leverage the KL divergence to force the optimized distribution to not deviate too far from the pretrained model. Experiments and theoretical results show the efficacy of the proposed method in fine-tuning diffusion models without over-optimization.\n\n1. The proposed method is well-presented, and the motivation behind the algorithm is interesting. The over-optimization problem is indeed critical when fine-tuning diffusion models with learned rewards.\n\n2. Extensive experimental results show the efficacy of the proposed method in improving the reward model while avoiding reward over-optimization.\n\n1. Leveraging generalization bounds via kernel RKHS and bootstrap is interesting, but I doubt their practicality for real applications. Firstly, the RKHS bound is usually too conservative to be useful, while the computational cost for the bootstrap method is pretty high since one has to train the model from scratch multiple times. As far as I can tell, the reward models used in the experiments are mainly single-layer MLPs, and it is doubtful whether this approach is useful when the reward model needs to be a larger model.\n\n2. Another problem with the conservative estimator of the reward models is that it is unclear whether it is useful given the current experimental results. On one hand, KL regularization is a widely-known technique for preventing over-optimization in diffusion models and is thoroughly studied in existing works, so it is certain that the KL regularization term will help. On the other hand, the proposed algorithm mixes both the conservative reward estimator and the KL regularization term together, making it unclear which part is playing the role in avoiding over-optimization. My guess is that, for the most part, only the KL regularization term is effective in the end.\n\nSTRL methods like AlignProp and DRaFT can work with ODE samplers, which are more commonly used in practice than SDE samplers. However, the method proposed in this work, due to the use of entropy regularization, can only adopt SDE samplers. I wonder if it is possible to design a regularization term for ODE samplers. Could the authors share some insights on this point?" }, { "confidence": 3, "rating": 7, "review_id": "w2ThrUBKm5", "review_text": "1) This paper analyzed the two mainstream angles of computational design.\n2) Proposed a hybrid one that offline fine-tunes generative models.\n3) Conduct experiments on two tasks to show the performance of their method.\n\n1) Sufficient theoretical analysis and detailed preliminaries.\n2) The idea is straightforward.\n3) The method is comprehensive.\n\n1) In the introduction, the advantages and disadvantages of the two mainstream issues are not fully analyzed.\n2) Insufficient metrics evaluation for image generation task.\n\nN/A" } ]
zGN0YWy2he
Scene Graph Disentanglement and Composition for Generalizable Complex Image Generation
There has been exciting progress in generating images from natural language or layout conditions. However, these methods struggle to faithfully reproduce complex scenes due to the insufficient modeling of multiple objects and their relationships. To address this issue, we leverage the scene graph, a powerful structured representation, for complex image generation. Different from the previous works that directly use scene graphs for generation, we employ the generative capabilities of variational autoencoders and diffusion models in a generalizable manner, compositing diverse disentangled visual clues from scene graphs. Specifically, we first propose a Semantics-Layout Variational AutoEncoder (SL-VAE) to jointly derive (layouts, semantics) from the input scene graph, which allows a more diverse and reasonable generation in a one-to-many mapping. We then develop a Compositional Masked Attention (CMA) integrated with a diffusion model, incorporating (layouts, semantics) with fine-grained attributes as generation guidance. To further achieve graph manipulation while keeping the visual content consistent, we introduce a Multi-Layered Sampler (MLS) for an "isolated" image editing effect. Extensive experiments demonstrate that our method outperforms recent competitors based on text, layout, or scene graph, in terms of generation rationality and controllability.
https://openreview.net/pdf/66bc4339c157f7e9cfc224307ac92ad79e98a4b8.pdf
[ { "confidence": 4, "rating": 8, "review_id": "6yzbDS3EEG", "review_text": "This paper employs scene graph for image generation. Different from the previous methods, they employ the generative capabilities of variational autoencoders and diffusion models in a generalizable manner, compositing diverse disentangled visual clues from scene graphs. The authors propose a semantics-Layout Variational AutoEncoder to jointly derive layouts and semantics from scene graph. Then they develop CMA integrated with a diffusion model. They also introduce the multi-layered sampler for achieving graph manipulation. Experiments show that the method outperforms existing methods.\n\n1. The paper address the problems in existing methods well. Existing methods in the field of scene graph to image generation mainly depends on the layout or semantics. Using one of them may cause some problems. Inspired by these phenomenons, the authors propose the method to jointly considering the layout and the semantics. What's more, the techniques used in the framework are novel enough. \n2. The authors conduct plenty of experiments. The ablation studies support the motivations.\n\nThe only weakness I found is that the authors should reorganize the paper carefully. The writings is not so clear in some sections. For example, the multi-layered sampler section is too abstract to be understood.\n\nNo." }, { "confidence": 4, "rating": 6, "review_id": "Yp80tB6uTe", "review_text": "The paper proposes DisCo (Disentangled Compositional image generation), which integrates both layout and semantic information derived from scene graphs to improve the quality and controllability of generated images. In particular, DisCo has three main components: Semantics-Layout Variational AutoEncoder (SL-VAE) for disentangling spatial layouts and semantics, Compositional Masked Attention (CMA) for fine-grained attribute guidance, and Multi-Layered Sampler (MLS) for object-level graph manipulation. Extensive experiments demonstrate that DisCo outperforms state-of-the-art methods in generating rational and controllable images from text, layout, and scene graph conditions.\n\n1. The motivation is clear. The idea of disentangling layout and semantics from scene graphs is novel.\n2. DisCo outperforms recent methods in both fidelity and diversity of image generation, as evidenced by the IS and FID scores. Overall, it enhances the generation diversity and controllability.\n3. Extensive experiments and ablation studies have demonstrated the effectiveness and the contribution of each component.\n\n1. The increased inference cost of DisCo (Table 7). In particular, CMA mechanism might increase the computational cost, which may limit the method's scalability and efficiency, especially for large-scale applications. Moreover, since diffusion models are already quite large, the additional AutoEncoders (Lines 129-130) may result in more parameter and memory overhead.\n2. DisCo requires expensive training, e.g. 4 A100 GPUs, as indicated in Lines 202-203. With more models releasing recently, this technique might be not scalable.\n3. The image quality looks better with this proposed method. However, as metrics today cannot always reflect the real image quality, it would be more convincing to conduct a user study, e.g. votes, to quantify the advantage of DisCo compared to previous works.\n\nSee the weakness." }, { "confidence": 4, "rating": 6, "review_id": "o6rsgqdca4", "review_text": "This paper presents \"DisCo,\" a novel framework for generating complex images from structured scene graphs. Unlike traditional text-to-image or layout-to-image methods, DisCo utilizes a Semantics-Layout Variational AutoEncoder (SL-VAE) to disentangle and generate diverse spatial layouts and interactive semantics from scene graphs. It incorporates these elements using a Compositional Masked Attention (CMA) mechanism within a diffusion model, enhancing generation control and rationality. The framework also introduces a Multi-Layered Sampler (MLS) for flexible, graph-based image editing, preserving visual consistency while manipulating object attributes and positions.\n\n1. Introduces innovative methods for disentangling and integrating spatial and semantic information from scene graphs, which is a novel approach in image generation. \n2. Offers significant improvements in image generation from complex scene graphs, enhancing both the fidelity and controllability of generated image\n\n1. The paper lacks quantitative comparisons with closely related baselines, such as R3CD, which could provide a more comprehensive evaluation of the model's performance. Inclusion of these comparisons could help validate the proposed advantages of DisCo over existing methods, particularly in handling complex scene graph-to-image generation tasks.\n2. Some generated images, particularly those highlighted in Figure 10, exhibit unnatural aspect ratios and stretched elements, suggesting issues with the model’s handling of object proportions and spatial embeddings. \n3. It would be great to discuss the scalability aspects, particularly how the proposed model handles graph sizes that exceed typical training configurations.\n4. how the model performs with imperfect or noisy scene graphs, which are common in automatically extracted data.\n\n1. The paper presents results on standard benchmarks. However, can you provide insights or preliminary results on how the model performs across datasets with higher variability in object complexity and scene density? \n2. Why were certain closely related baselines omitted from quantitative comparisons? Could inclusion of these baselines provide a more comprehensive evaluation?" }, { "confidence": 3, "rating": 5, "review_id": "UScqubBWfc", "review_text": "This paper proposes a method that uses a scene graph and integrates variational autoencoders (VAEs) and diffusion models to address complex scene generation. Specifically, a Semantics-Layout Variational AutoEncoder (SL-VAE) is used to derive diverse layouts and semantics from the scene graph, while a Compositional Masked Attention (CMA) combined with a diffusion model incorporates these elements as guidance. Additionally, a Multi-Layered Sampler (MLS) is introduced for isolated image editing. Experiments show that this approach outperforms recent methods in generation rationality and controllability.\n\n1. This paper considers an important issue in text-to-image generation realm.\n2. The structure design in Section 3 makes sense.\n3. The experimental results shown in Table 1 and 2 show the effectiveness of this method.\n\n1. My main concern is the practical application of this method. As we all known, scene graph building is not a trivial task, but you don't explain detail in the paper how to construct an exact scene graph. In addition, during inference, the prompt proposed by uses may be non-standard so that building a scene graph may be more difficult. \n2. Besides, recent SOTA models, e.g. DALLE3, stable diffusion 3 try to solve the complex generation task by large-scale fine-grained dataset construction. How do you compare your methods with these data-centric methods. The authors should spend more space discussing the issues.\n\nPlease see the weakness" } ]
zFHJUSTZka
Direct Language Model Alignment from Online AI Feedback
Direct alignment from preferences (DAP) methods, such as DPO, have recently emerged as efficient alternatives to reinforcement learning from human feedback (RLHF), that do not require a separate reward model. However, the preference datasets used in DAP methods are usually collected ahead of training and never updated, thus the feedback is purely offline. Moreover, responses in these datasets are often sampled from a language model distinct from the one being aligned, and since the model evolves over training, the alignment phase is inevitably off-policy. In this study, we posit that online feedback is key and improves DAP methods. Our method, online AI feedback (OAIF), uses an LLM as annotator: on each training iteration, we sample two responses from the current model and prompt the LLM annotator to choose which one is preferred, thus providing online feedback. Despite its simplicity, we demonstrate via human evaluation in several tasks that OAIF outperforms both offline DAP and RLHF methods. We further show that the feedback leveraged in OAIF is easily controllable, via instruction prompts to the LLM annotator.
https://openreview.net/pdf/a5ce202a1af2c8372842c13915d120ee5e1306b2.pdf
[ { "confidence": 3, "rating": 4, "review_id": "yo7MTsPaP2", "review_text": "This paper propose OAIF, an online method to align language model with human preference where feedback from language models serve as a surrogate of human feedback. The key of OAIF is to use online generated preference pair along the training process. Experiment results shows that, by switching offline preference dataset to online dataset labeled by other language models, the generated responses are more aligned with human preference.\n\nThe strengths of the paper are listed below:\n\n1. This paper introduces OAIF, which is featured by using on-the-fly generated preference pairs and AI-provided labels.\n2. The author conducted experiment on various direct alignment methods and the results consolidate the claim by the authors\n\nMy questions and concerns are listed as follows:\n\n1. My first concern is regarding the novelty of the paper. It seems that the language model annotator is essentially a preference model. Therefore, OAIF can be seen as a method of online direct alignment algorithm with access to a preference model. The author mentioned several previous work with on-policy generation and online feedback but in need of a reward model. How is OAIF different from different from these method if we simply plug in the language model annotator as the reward model in their methods?\n2. At line 118 the author pointed out that RM might suffer from distribution shift because the training data of RM might not share the same distribution with $\\pi_\\theta$. However, it seems to me that using language model as preference annotator cannot bypass this problem since the language models' pretraining corpus or the finetuning corpus relating to preference labeling has a similar distribution with $\\pi_\\theta$.\n3. How is OAIF's performance compared to other online methods like RSO and IterativeDPO? I think that these methods might also be included as baselines since reward model can also be taken by AI annotators.\n\nSee weakness Section" }, { "confidence": 4, "rating": 4, "review_id": "Rk4byMjJBr", "review_text": "This work extends offline preference learning methods, i.e., DPO, to a online variant by using LLM as annotator to collect new datasets for further preference learning. The results show that Direct alignment from preferences (DAP) methods win-rate over the offline methods beyond 60%.\n\n1. Paper is good writing, easy to follow.\n2. This online variant provides demonstrates significant performance improvements over offline DAP and RLHF methods through comprehensive evaluations.\n\n1. The improvement by extending online is under expectation as it introduces more datasets and training budgets. \n2. The contribution is limited. The only difference compared to the prior method is substituting the reward model of prior methods (Iterative DPO) to LLMs, though I agree the explicitly static reward model may introduce the model distributional shift problem.\n3. Some drawings or comparisons are not fair enough. (a). Table 1 explicitly avoids the limitation of this method by leveraging the feedback from LLM, though it is another variant of the \"reward model\". (b). Figure 3, the training step is not an approximate x-axis as the online DPO variant has been heavily fine-tuned offline. \n4. There are no theoretical foundations, or new plausible explanations, aside from more datasets and the online budget, for the further improvement of the online variant DPO.\n\nn/a" }, { "confidence": 4, "rating": 5, "review_id": "wqV5TcvFBK", "review_text": "This paper applies direct alignment from preferences (DAP) methods, particularly DPO, to online settings where responses are sampled in an on-policy manner and feedback is provided by the LLM annotator in real-time. Extensive experiments demonstrate the effectiveness of these simple ideas.\n\nThe paper is well-written, with detailed explanations of introduced definitions and discussions with existing methods. \n\nThe experiments are well-designed, supporting the main idea of the paper. The proposed prompt-controllable approach is particularly commendable.\n\nThe rationale for why on-policy learning brings performance gains is not well clarified. The cited reference [1] does not provide strong support for this claim. There is no experimental evidence that on-policy sampling encourages exploration. \n\nMost experiments are conducted with the closed-source LLM Palm; evaluating state-of-the-art open-sourced LLMs would enhance generalizability. \n\nIt is unclear how much of the performance gains are due to on-policy sampling versus online feedback. \n\nThe reasons why utilizing online on-policy data can avoid overfitting and improve performance should be further analyzed and discussed.\n\nReferences:\n[1] Lambert, N., Wulfmeier, M., Whitney, W., Byravan, A., Bloesch, M., Dasagi, V., Hertweck, T., and Riedmiller, M. The challenges of exploration for offline reinforcement learning. arXiv preprint arXiv:2201.11861, 2022.\n\nIs it correct to categorize RSO and iterative DPO as on-policy generation in Table 1? \n\nWhat new opportunities and challenges arise when applying DAP to online settings? Did you encounter common issues of DAP methods, such as overfitting, in the online setting? What are the differences in these issues between online and offline settings? \n\nWhere are the experimental results to support the superiority of using LLMs over RMs to provide online feedback in Line 267?" }, { "confidence": 4, "rating": 3, "review_id": "9QbqVUOaL1", "review_text": "The paper presents a new method called Online AI Feedback (OAIF) for direct alignment from preferences (DAP) that addresses the limitations of existing DAP methods, which rely on static, offline feedback datasets. By using an LLM as an online annotator to provide real-time feedback during each training iteration, OAIF ensures the alignment process remains on-policy and adapts dynamically to the evolving model. Through human evaluations across various tasks, the authors demonstrate that OAIF outperforms traditional offline DAP and reinforcement learning from human feedback (RLHF) methods.\n\nOAIF uses LLMs for preference annotation, eliminating the need for a separate reward model and large datasets typically required for RLHF methods. It introduces a new way to address off-policy issues in policy optimization, a significant problem in traditional DPO methods.\n\nThe paper is well-written and easy to understand. OAIF outperforms offline DPO and other offline RLHF methods.\n\n1. The idea is straightforward but lacks theoretical proof. The proposed method combines DPO and AI feedback, unlike the constitutional AI paper, which integrates PPO with AI feedback. However, this point is minor. Given the abundance of concurrent work [1-7], the authors should further develop the theoretical analysis of their approach to strengthen their method. \n\n2. Different methods should use an equal amount of training data. In the second epoch of onlineDPO, although the prompts remain the same as in the first epoch, the responses and rank information differ due to online generation.\n\n3. Recent results on Reward Bench indicate that small reward models are more effective than LLM critiques. The iterative DPO methods are similar to OAIF DPO. A performance comparison between OAIF and various iterative DPO methods using cheaper reward models, as both address the off-policy issue, is essential and should be included.\n\n[1] Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint\n\n[2] RS-DPO: A Hybrid Rejection Sampling and Direct Preference Optimization Method for Alignment of Large Language Models\n\n[3] RSO: Statistical rejection sampling improves preference optimization\n\n[4] Some things are more cringe than others: Preference optimization with the pairwise cringe loss. arXiv preprint arXiv:2312.16682\n\n[5] Hoang Tran, Chris Glaze, and Braden Hancock. 2023. Iterative dpo alignment. Technical report, Snorkel AI.\n\n[6] Self-rewarding language models. arXiv preprint arXiv:2401.10020\n\n[7] Is dpo superior to ppo for llm alignment? a comprehensive study. arXiv preprint arXiv:2404.10719.\n\nN/A" } ]
zDaD8zv8tG
A teacher-teacher framework for clinical language representation learning
In recent years, there has been a proliferation of ready-to-use large language models (LLMs) designed for various applications, both general-purpose and domain-specific. Instead of advocating for the development of a new model or continuous pretraining of an existing one, this paper introduces a pragmatic teacher-teacher framework to facilitate mutual learning between two pre-existing models. By leveraging two teacher models possessing complementary knowledge, we introduce a LIghtweight kNowledge alignmEnt (LINE) module aimed at harmonizing their knowledge within a unified representation space. This framework is particularly valuable in clinical settings, where stringent regulations and privacy considerations dictate the handling of detailed clinical notes. Our trained LINE module excels in capturing critical information from clinical notes, leveraging highly de-identified data. Validation and downstream tasks further demonstrate the effectiveness of the proposed framework.
https://openreview.net/pdf/897ef6718180dabcd9f755adb58c00cb26513df5.pdf
[ { "confidence": 4, "rating": 6, "review_id": "7hQTCoTndx", "review_text": "The paper introduces a novel teacher-teacher framework named LIghtweight kNowledge alignmEnt (LINE), which facilitates knowledge exchange between two pre-existing large language models (LLMs) to enhance clinical language representation. By leveraging complementary knowledge from general-purpose and domain-specific models, LINE aims to harmonize their knowledge within a unified representation space. The framework is validated through downstream tasks showing that the LINE model outperforms individual pre-existing models in understanding and processing clinical language. This approach allows for more efficient sharing of clinical pretrianed models.\n\n1. **Clarity and Structure**: The paper is well-written and structured, offering a clear motivation for the study. This makes it accessible and engaging for readers, facilitating a deeper understanding of the proposed framework.\n\n2. **Novelty and Utility**: The proposed teacher-teacher framework, LIghtweight kNowledge alignmEnt (LINE), is innovative, providing a pragmatic approach to integrating the strengths of different pre-trained models. This methodology is particularly notable for its potential to enhance clinical language representations without the need for developing new models from scratch.\n\n3. **Usability and Efficiency**: The framework is user-friendly and does not require retraining of the original models, which significantly reduces computational overhead and simplifies its adoption in real-world applications.\n\n4. **Empirical Validation**: The experimental results demonstrate stable and significant improvements over existing methods, substantiating the efficacy and value of the proposed framework in practical settings.\n\n**Data Requirements and Availability**: A notable limitation of the proposed LINE framework is its dependency on well-aligned and specific types of data sources, which may not be readily available or commonly found in practical settings. For example, integrating data from disparate modalities like CT and MRI requires the availability of cases that include both types of data, which may not always be feasible. This requirement could limit the framework's applicability across different clinical or real-world scenarios where such aligned data sets are scarce.\n\n1. See weakness, under such situation is it possible to apply your method?" }, { "confidence": 4, "rating": 4, "review_id": "dcaSJleU0M", "review_text": "This paper presents an interesting topic on LLM but the importance of this problem is not convincing and the methods here is not novel.\n\nThe teacher-teacher concept is novel to some extent.\n\n1. The problem's importance is not significant.\n2. There lacks the inclusion of SOTA models like llama, gpt, etc.\n3. The results improvement is limited as shown in Tab. 4,5.\n4. The Fig1 lacks details of the proposed method.\n\nNone." }, { "confidence": 4, "rating": 3, "review_id": "Ttuoph39t1", "review_text": "The authors look to address the question representational alignment between language models trained on different textual domains to improve performance of potentially both models on their out-of-domain text. The authors propose to specifically investigate this in the context of EHR text, and choose as their models for this CODER and BGE. They propose a contrastive loss, and additionally propose to train an alignment module/project layer rather than end-to-end training of the teacher models.\n\nThe concept is solid and well implemented and motivated. I wonder if it would be possible to further generalize it beyond medical text - which it is restricted too due to the reliance on alignment with extracted medical concepts by NILE. The discussion mentions this possibility, but it would be exciting to see it in action.\n\nThe clinical NLP benchmarks are particularly appropriate for the task.\n\nSome of the benchmark tasks are older, and the comparisons could be more robust. Some ablations are missing.\n\nThe project's scope is incredibly narrow: encoder models on extractive medical tasks. While the authors claim that the technique is broadly generalizable, it would be nice to see proof-of-concept. \n\nThe work seems to me to fit more into the realm of domain adaptation rather than learning by alignment. We aren't learning novel models here via alignment (like CLIP), but rather, pushing the learned representations of two different models into a common space. I'd strongly consider citing and discussing DA literature for this paper.\n\nWould it be possible to further test the LINE model on other, more varied, benchmarks to see how well those newly aligned representations perform? \n\nIt could also be exciting to explore this with generative models.\n\nWere alternative frameworks considered for the concept alignment? Why not align directly in embedding space without the grounding concepts? This would be an interesting ablation to perform to assess the significance of the extracted concepts on the underlying learned representation. Conversely, could you just fine-tune the generalist model on the extracted concepts as a means of medically aligning it? How well does that perform? \n\nWhy not also compare the BGE-->CODER projection (inverse direction of the BGE-->CODER projection)?\n\nIf it isn't technically feasible to do in an end-to-end fashion, perhaps this could be approximated by tuning LoRA on the base models?" }, { "confidence": 3, "rating": 5, "review_id": "iEqWNqUn87", "review_text": "This paper introduce a teacher-teacher framework for clinical language representation learning. The framework uses a lightweight knowledge alignment module to harmonize the knowledge of both models within a unified space, which including two steps: The first step involves initial training to define residuals and capture complementary information. The second step focuses on refining the alignment by recovering residual information. The framework was validated using the MIMIC-IV database, where the LINE model outperformed baseline models in aligning concept and text representations.\n\nThe main contribution of the work is proposed teacher-teacher framework, and training strategy.\n\n- Originality: The teacher-teacher framework is very interesting as it enables mutual enhancement between two pre-existing LLMs, a unique departure from traditional approaches that typically involve training a new model or continual pre-training of existing models. This innovative method opens new avenues for leveraging existing resources to achieve superior performance.\n\n- Quality: The paper demonstrates high quality through its validation using the MIMIC-IV database, a well-known and respected dataset in the clinical domain, adding significant credibility. Additionally, the LINE model's performance is compared against several strong baseline models, showing clear improvements across various downstream tasks, thus underscoring the robustness and reliability of the proposed framework.\n\n- Clarity: The paper is well-written and clearly structured, making it accessible to both domain experts and those new to the field. The introduction provides a comprehensive background and motivation for the proposed framework, while the methodology section offers detailed descriptions of the teacher models and the LINE module.\n\n- Significance: The practical applications and potential impact on the clinical domain shown the significance of this work. The teacher-teacher idea has substantial implications for more advancing NLP applications in other filed.\n\n1. Figure 1 is somewhat confusing. From my understanding, Teacher 1 should be a strong LLM, while Teacher 2 should be an LLM with existing domain-specific knowledge. However, Figure 1 gives the impression that Teacher 2 serves merely as a database, making the framework resemble a RAG framework.\n\n2. Although the paper compares the LINE model against several strong baseline models, it lacks a detailed comparison with the latest strong general LLMs, such as GPT-4, which should be considered a strong baseline. Consider adding a small comparative analysis or stating the advantages of the framework over simply using GPT-4.\n\n3. The paper underscore the practical value of the framework, but it does not sufficiently address potential practical implementation challenges, such as computational requirements and scalability when applied in real-world clinical settings.\n\n1.From Figure 1, if Teacher 2 only serves to provide domain-specific knowledge, why not implement a RAG framework, which is training-free and potentially more reliable?\n\n2. Have you addressed potential hallucination issues? Could one teacher potentially mislead the other during the knowledge exchange process?\n\n3. What are the potential computational and scalability challenges of implementing the teacher-teacher framework in real-world clinical settings? How do you propose to mitigate these challenges?\n\n4. How can regulatory mechanisms be incorporated into the framework for safety?" }, { "confidence": 4, "rating": 7, "review_id": "jVB6lZzQob", "review_text": "The paper proposes a mutual learning framework, called LINE, between two pre-existing LLMs in the healthcare domains. By harmonizing the knowledge of two distinct LLMs into a unified representation space, the model achieves better performance on intrinsic and extrinsic downstream evaluations of clinical tasks.\n\nClear motivation. Overall well written.\n\nThe methodology was reasonably designed to map representations from two distinct LLMs into a unified representation space.\n\nThe method achieves better performance on downstream clinical tasks.\n\n1. Only two LLMs (BGE and CODER) were aligned by LINE. It is unclear if LINE will work on combinations of other LLMs. \n\n2. LINE make downstream predictions based on clinical concepts only, rather than the full context. The concepts themselves can be negated, historical and hypothetical in context, but the proposed method does not seem to consider this.\n\n1. Why was NILE selected? Have any other extractors been compared? Does the selection of extractors have a significant impact on results?\n\n2. Line 222, which contrastive loss function was used eventually?" } ]
zDYXdR3ClP
UIR-LoRA: Achieving Universal Image Restoration through Multiple Low-Rank Adaptation
Existing unified methods typically treat multi-degradation image restoration as a multi-task learning problem. Despite performing effectively compared to single degradation restoration methods, they overlook the utilization of commonalities and specificities within multi-task restoration, thereby impeding the model's performance. Inspired by the success of deep generative models and fine-tuning techniques, we proposed a universal image restoration framework based on multiple low-rank adapters (LoRA) from multi-domain transfer learning. Our framework leverages the pre-trained generative model as the shared component for multi-degradation restoration and transfers it to specific degradation image restoration tasks using low-rank adaptation. Additionally, we introduce a LoRA composing strategy based on the degradation similarity, which adaptively combines trained LoRAs and enables our model to be applicable for mixed degradation restoration. Extensive experiments on multiple and mixed degradations demonstrate that the proposed universal image restoration method not only achieves higher fidelity and perceptual image quality but also has better generalization ability than other unified image restoration models.
https://openreview.net/pdf/415b69d9bcc6b12d03630a12da29d1e20b60dd12.pdf
[ { "confidence": 5, "rating": 3, "review_id": "vRHMsgaKqf", "review_text": "This paper introduces a universal image restoration framework UIR-LoRA based on multiple low-rank adapters. UIR-LoRA employs the pre-trained text-to-image diffusion model SD-turbo as the shared component. It utilizes a LoRA composing strategy based on the degradation similarity predicted by CLIP encoder to combine different LoRA modules. Experiments show the effectiveness of the proposed method.\n\n1. The proposed LoRA-based Universal IR method is easy to understand and follow.\n2. The motivation of this paper is very clear to me.\n\n1. UIR-LoRA adopts SD-turbo as the pre-trained backbone for image restoration. However, SD-tubo utilizes VAE with high compression rate to encode input images, resulting in severe detail distortion for image restoration. This issue has been widely discussed in recent published works [1,2]. However, the paper ignores this very important issue in the Method Section and only mentions the skip-connections for VAE in Line 223.\n2. The degradation-aware router seems to be unreliable. I do not believe that the original pre-trained CLIP Text Encoder can distinguish between different degradations through degraded text representations, such as \"rain\" and \"raindrop\". Therefore, DA-CLIP fine-tunes the original CLIP. But this paper doesn't contain any discussions about this.\n3. This paper does not provide complete technical details, such as how the LQ image is used as a condition for SD-turbo. Is ControlNet used, or is it directly concatenated? I do not see any information about this in the paper. \n4. Tab. 1 only reports the trainable Param for UIR-LoRA. I think it's necessary to report the overall Param of the model. In addition, the reported PSNR for DiffBIR is very low. Did the authors add skip-connections to the VAE of DiffBIR for a fair comparison?\n5. The visual results in Fig. 3 seem strange. The visual results of Restormer show noticeable artifacts between patches. Do the authors test Restormer using a tiled mode? As far as I know, using a single A100 GPU (Line 251), Restormer can restore the entire image without encountering out-of-memory issues.\n\n[1] Wang, Wenjing, et al. \"Zero-Reference Low-Light Enhancement via Physical Quadruple Priors.\" In CVPR, 2024.\n\n[2] Geng, Zigang, et al. \"Instructdiffusion: A generalist modeling interface for vision tasks.\" In CVPR, 2024.\n\n1. Authors should discuss the skip-connections for VAE in the Method Section with more details.\n2. Can authors provide the degradation prediction accuracy for more different predictions (eg, rain/raindrop)?\n3. Authors should provide more technical details of the proposed method.\n4. More experimental results and explanations should be included." }, { "confidence": 4, "rating": 6, "review_id": "u2nw9IubUv", "review_text": "This paper proposes to perform universal image restoration via multiple low-rank adaptation. The key idea is to leverage a pre-trained stable diffusion model as the shared component and transfer it to specific degradations with LoRA adaptation. A degradation-aware router is further proposed to generate weights for LoRA combination based on degradation confidence. In experiments, the authors evaluated their method on multi-degradation and mixed-degradation datasets and conducted several ablation experiments on their core components.\n\n- The idea of applying LoRA to a pre-trained SD for multi-task image restoration is promising and interesting.\n- The overall presentation is easy to follow.\n- The experimental results are good and the ablation studies make sense.\n\n- ControlNet is the most popular approach to adapting SD models to other tasks. I'm curious why the authors chose LoRA? As far as I know, LoRA is often used for large language models (with billions of parameters). It would be great to provide more detailed motivation in the introduction.\n- In line 123, maybe it's better to use \"concatenate\" or other operators instead of \"add\" to present the unified parameters. Here, the weight $s_k$ can be ignored.\n- Can the authors use other SD models as the base model? I believe applying LoRA to a multi-step diffusion process can further illustrate its efficiency.\n- In Eq. (4), $s_0 \\cdot M_k$ is used in both numerator and denominator, which seems weird and confusing.\n- The mixed degradation experiment is cool. It would be interesting if the authors could apply their model to real-world degraded images.\n- Line 45: proposed -> propose\n\nIn the degradation-aware router, have you finetuned the CLIP to align degraded images with correct degradation names? How do you choose the degradation names as the vocabulary bank?" }, { "confidence": 3, "rating": 5, "review_id": "YFZAHsFSgm", "review_text": "This submission proposes a transfer-learning based strategy to address challenges related to image-degradation restoration. The premise is that a pre-trained generative model can be employed as a common starting component for multiple degradation types, upon which distinct sets of trainable parameters (ie. low-rank adaptors) can be added in order to address specific-degradation restoration tasks. Mixed-degradation restoration is enabled through a top-K hyperparameter, that affords a mixture of (degradation) experts to be active. The experimental setup considers multi and mixed image restoration problems where average results are offered across image-degradation datasets and appropriate standard quantitative metrics, qualitative examples, are reported in comparison with alternative approaches.\n\n* The technique described for piping specific samples down specific low-rank adaptor chutes is relatively easy to understand and yet reportedly results in competitive restoration accuracy for investigated datasets. \n\n* Nascent investigations into mixed-degradation image restoration problems provide a promising seed to be followed.\n\n* The writing is of a reasonable standard.\n\n* The key idea of leveraging pretrained VLM features (and specifically CLIP) for the task of image restoration from multiple degradations, pre-dates the current submission [R1]. While authors clearly go to some length to highlight their alternative CLIP-based scheme, which amounts to envoking specific (pre-existing [R2]) low-rank adaptors, the core technical contributions here can be regarded as somewhat limited. \n\n* The phrase 'Universal Image Restoration' may not be a sufficiently accurate (or modest) description for the proposed method. The submission collates ten different image restoration tasks which, despite vague statements in the abstract, remains a 'multi-task' not a 'universal' setup. Samples for all ten degradation tasks are shared between train and test (Sec. A.1) and individual task adaptors appear to be trained independently on task-specific datasets (L188--196). Generalisation ability to previously unseen degradations is also not considered. Suggest method description requires reworking.\n\n* The claim that multi-task learning (MTL) frameworks, designed to handle image restoration for multiple degradations, share all parameters across different degradations (L029) is incomplete and somewhat misleading. Several existing MTL works (eg. [R3,R4]) make use of both shared and task-specific parameter subsets for multiple image restoration tasks. Indeed 'which proportion of parameters should be shared and which should be task specific' can be considered a fundamental (and long standing) MTL question. The idea of benefiting from commonalities between image restoration tasks is well understood and my concern is that this casts doubt on a core premise of the submission. \n\n\nReferences\n\nR1. Controlling Vision-Language Models for Multi-Task Image Restoration. ICLR 2024.\n\nR2. LoRA: Low-rank adaptation of large language models. ICLR 2022.\n\nR3. All in One Bad Weather Removal using Architectural Search. CVPR 2020.\n\nR4. Pre-Trained Image Processing Transformer. CVPR 2021.\n\nMinor:\n\nL076: 'draining' --> 'deraining'\n\nL099: 'mim' --> 'min'\n\nL238: 'aspects' --> 'aspects.'\n\n> 'for mixed degradations, a larger K value is required to handle the more complex situtation' (L264). \n\nCan additional results be provided for alternative hyperparameter settings (eg. K=1 and K=10) in Tab.2, towards evidencing this claim?" }, { "confidence": 5, "rating": 6, "review_id": "fTzFswm6lA", "review_text": "The paper proposes universal image restoration framework using multiple low-rank adapters that learns task specific weights from to perform multi-domain transfer learning. the proposed method leverages the pre-trained generative model weights as the shared component and adapts it task specific low-rank adapters. At each layer in the restoration pipeline the proposed method uses the degradation similarity to combine LoRA adapters outputs, this enables the proposed to handle for mixed degradation restoration.\n\n- The paper proposes LoRA adapters to learn task specific weights and proposes a strategy to combine the adapter outputs using degradation similarity measure\n- extensive experiments are performed showing the proposed strategy works better than random and average in table 3.\n- extensive experiments are performed to show the proposed methods performance against the sota methods in table 1 for mutliple degradation task.\n- Extensive experiments are performaed showing impact of LoRA rank and prediction accuracy\n\n- In table of the paper authors compared proposed method against sota on REDS and LOLBlur datasets, both these datasets have mixed degradations of blur, jpeg compression, noise, and low light. Although these comparisons performed on mixed degradations, it would be helpful to how the proposed method performs on mixed weather conditioned images (MID6), which is comparatively challenging than REDS and LOLBlur datasets. \nMID6: Multimodal Prompt Perceiver: Empower Adaptiveness, Generalizability and Fidelity for All-in-One Image Restoration, CVPR, 2024.\n\n- Can authors confirm, whether network re-trained seperately for each experiment in table-1, and table-2 separately, i.e. table-1 and table-2 trained network weights for proposed method are different.\n\n- Can the proposed network handle unknown degradation present in the input degradation image\n- From the table-3, it is evident that Top-1, and top-2 are almost same performance as All, can authors coment on this, this makes wonder if the input image has only one degradation as dominating for this experiment. Can authors show this experiment on different dataset like MID6" }, { "confidence": 5, "rating": 4, "review_id": "sgRkeGGzbD", "review_text": "This paper presents a framework to improve image restoration across various degradation types using Low-Rank Adapters (LoRA). The proposed method adapts a pre-trained generative model to each degradation type. It performs a weighted sum of the output of adapted models using the estimated degradation of input images. The proposed method performs impressive results in restoration accuracy and resources.\n\nThe proposed method is interesting and reasonable.\nExperimental results support this paper's contributions and the proposed method's effectiveness.\n\nIn Table 3, the 'Top-1' strategy performs almost the same as the 'All' strategy, which limits the motivation of the weighted sum of the adapted models.\nTable 6 presents the restoration performance comparisons for each degradation. The proposed method underperforms previous works in significant degradation types such as blurry, low-light, raindrop, and rainy.\nThe average scores might mislead the evaluation performances.\n\nHow about comparing the proposed method with the following paper?\nSelective Hourglass Mapping for Universal Image Restoration Based on Diffusion Model, CVPR 2024" } ]
zBMKodNgKX
FedNE: Surrogate-Assisted Federated Neighbor Embedding for Dimensionality Reduction
Federated learning (FL) has rapidly evolved as a promising paradigm that enables collaborative model training across distributed participants without exchanging their local data. Despite its broad applications in fields such as computer vision, graph learning, and natural language processing, the development of a data projection model that can be effectively used to visualize data in the context of FL is crucial yet remains heavily under-explored. Neighbor embedding (NE) is an essential technique for visualizing complex high-dimensional data, but collaboratively learning a joint NE model is difficult. The key challenge lies in the objective function, as effective visualization algorithms like NE require computing loss functions among pairs of data. In this paper, we introduce \textsc{FedNE}, a novel approach that integrates the \textsc{FedAvg} framework with the contrastive NE technique, without any requirements of shareable data. To address the lack of inter-client repulsion which is crucial for the alignment in the global embedding space, we develop a surrogate loss function that each client learns and shares with each other. Additionally, we propose a data-mixing strategy to augment the local data, aiming to relax the problems of invisible neighbors and false neighbors constructed by the local $k$NN graphs. We conduct comprehensive experiments on both synthetic and real-world datasets. The results demonstrate that our \textsc{FedNE} can effectively preserve the neighborhood data structures and enhance the alignment in the global embedding space compared to several baseline methods.
https://openreview.net/pdf/713ead3a7d3c84218a49bae4d46cdf7a3a34d042.pdf
[ { "confidence": 4, "rating": 5, "review_id": "7S1j7GnnvY", "review_text": "The paper introduces a novel approach to address the challenge of collaboratively visualizing high-dimensional data in a federated learning (FL) environment. The proposed method, FEDNE, integrates the FEDAVG framework with contrastive neighbor embedding (NE) techniques, aiming to preserve data privacy while ensuring effective data visualization. By employing a surrogate loss function and an intra-client data mixing strategy, FEDNE seeks to enhance the alignment and preservation of neighborhood structures in the global embedding space. The paper includes comprehensive experiments on both synthetic and real-world datasets, demonstrating the effectiveness of FEDNE in outperforming several baseline methods in terms of neighborhood data structure preservation and clustering.\n\n1. FEDNE introduces a novel integration of FEDAVG with contrastive NE techniques, addressing the unique challenges of pairwise data relationships in federated learning environments without requiring data sharing.\n2. The intra-client data mixing strategy effectively enhances local data diversity, mitigating the limitations of biased local kNN graphs and ensuring better neighborhood representation.\n3. The paper provides a thorough evaluation of FEDNE using various datasets and metrics, showcasing its superior performance compared to baseline methods in preserving neighborhood structures and clustering.\n\n1.\tWhile the authors mention that FEDNE introduces only 35% more GPU time compared to FEDAVG, the overall complexity and scalability in a more extensive, real-world setting are not fully addressed. The authors should further investigate how FEDNE scales with a significantly larger number of clients and more complex datasets or models.\n2.\tThe paper proposes intra-client data mixing as a solution to the bias in local kNN graphs. However, this approach might not entirely mitigate the issue of incorrect neighbor connections, especially in highly imbalanced datasets. More detailed comparisons with alternative methods or further enhancements could provide a more robust solution.\n3.\tThe focus is primarily on dimensionality reduction. The validation results are performed only on the vision classification tasks. Extending the discussions and analyses to include applications in other domains could be beneficial.\n\n1.\tCould you provide more details on the process of training the surrogate models? Specifically, how do you ensure that these models effectively capture the repulsive forces between dissimilar data points across different clients?\n2.\tNon-IID data is a common challenge in federated learning. How does FEDNE handle extreme cases of non-IID data distribution? Have you considered any additional mechanisms to ensure robustness in such scenarios?\n3.\tHow sensitive is FEDNE to the choice of hyperparameters, such as the step size for grid sampling, the number of neighbors in kNN, and the weight in intra-client data mixing? Have you performed any sensitivity analysis?" }, { "confidence": 5, "rating": 3, "review_id": "3iE9Fc2nID", "review_text": "The paper \"FEDNE: Surrogate-Assisted Federated Neighbor Embedding for Privacy-Preserving Dimensionality Reduction\" presents a method for visualizing high-dimensional data while maintaining privacy without requiring any shareable reference data. \n\nFederated Neighbor Embedding (FEDNE): A framework combining federated averaging (FEDAVG) with contrastive neighbor embedding (NE) to create a joint NE model across multiple clients without compromising data privacy.\n\nSurrogate Loss Function: An innovative loss function to enhance inter-client repulsion in the global embedding space, ensuring better separation of data points from different clients while preserving local data structures.\n\nData-Mixing Strategy: A technique to counter issues like invisible and false neighbors in local k-nearest neighbor (kNN) graphs by mixing data from various clients during training, thus improving the quality of the learned embeddings.\n\nWell-Presented: The paper is clearly and coherently written, making it easy to follow.\nNovel Approach: The study addresses an important problem with a novel approach, combining federated learning with neighbor embedding techniques.\n\nPrivacy Concerns: While the approach is innovative, the paper does not sufficiently address privacy concerns. It lacks experiments and guarantees demonstrating the privacy preservation of the FedNE approach.\n\nComputational Inefficiency: The method appears to be computationally inefficient. There are no experiments conducted on large datasets, such as those in real-world medical or other privacy-critical domains, where computational complexity could be a significant issue.\n\nInadequate Analysis of Related Work: The related works section is not thoroughly analyzed or discussed, missing critical comparisons and context necessary for a comprehensive understanding of the state of the art.\n\nThe study's applicability could be strengthened by extending beyond benchmark datasets to encompass real-world, privacy-sensitive datasets found in domains such as healthcare or finance. This expansion would provide a more robust demonstration of the method's practical relevance and effectiveness. \n\nAdditionally, addressing pairwise issues associated with attraction terms is essential for improving the preservation of neighborhood structures and enhancing clustering quality. \n\nFurthermore, it is crucial to conduct thorough analyses aimed at optimizing the computational efficiency and scalability of the algorithms, ensuring their capability to handle large-scale datasets effectively. Moreover, the method currently lacks explicit consideration of privacy guarantees. And on elucidating how privacy concerns are addressed within the framework and formalizing privacy guarantees to assure users and stakeholders.\n\nPrivacy Guarantees: The paper lacks a thorough discussion on the privacy guarantees of the proposed method, especially against adversarial attackers.The experimental evaluation focuses solely on utility results, with no evaluation of data privacy. How is privacy preservation quantified and ensured? What is the acceptable level of privacy preservation? The paper should include theoretical arguments and experiments demonstrating actual privacy management. From a privacy perspective, it would be helpful to provide guidance on the limitations of this method, particularly regarding transparency and explainability (e.g., OECD AI principle 1.3). What measures are in place to address these concerns?\n\nExperiments: No experiments are conducted on downstream tasks related to the problem, aside from analyzing structural properties? Moreover, no experiments conducted on large datasets? Lastly, how do the authors plan to tackle the problem of heavy computation? The method appears computationally intensive, which could hinder its practicality.\n\nReconstruction from Gradients: According to Zhu and Han’s study [1], model gradients can be used in some scenarios to partially reconstruct client data. How does the proposed method address this issue?\n\nThe paper claims to operate without relying on shareable reference data, yet it utilizes additional 2D data points from grid sampling to estimate the training targets via repulsion loss and additional augmented data points via interpolation for the attractive loss. Given this, how does the strategy address the significant computational burden it introduces, and is it feasible for real-world applications where computational efficiency is critical?\n\n[1] Ligeng Zhu, Zhijian Liu, and Song Han. 2019. Deep leakage from gradients. Advances in neural information processing systems 32 (2019)" }, { "confidence": 3, "rating": 5, "review_id": "vFgp7SQMqq", "review_text": "The paper presents a new federated learning approach named FEDNE for dimension reduction using contrastive neighbor embedding (NE). The key idea is the introduction of a surrogate loss function that each client learns and shares, which compensates for the lack of inter-client repulsion essential for global alignment in the embedding space. Additionally, the paper proposes a data-mixing strategy to augment local data, addressing issues of invisible and false neighbors in local kNN graphs. Comprehensive experiments demonstrate that FEDNE effectively preserves neighborhood data structures and enhances alignment in the global embedding space compared to several baseline methods.\n\n1. The studied problem is important. There could be many downstream tasks after applying federated neighbor embedding.\n\n2. Many metrics are included in the experiments to evaluate the quality of the resulting embeddings\n\n1. The paper lacks investigation on the effect of choice of hyperparameter k.\n\n2. The improvement of FEDNE is significant on some metrics (e.g., kNN) but is very limited in other metrics (e.g., conti.). The paper lacks a detailed exploration of why FEDNE produces different behavior for different metrics.\n\n3. I suggest to highlight the best results in Table 2. Currently the results of FEDNE are highlighted although it may not achieve the best performance in some cases.\n\n1. How would the parameter k affect the performance of FEDNE? How to set k for different settings?\n\n2. What are the major differences between the metrics? Why the improvement of FEDNE differ a lot across different metrics?" }, { "confidence": 3, "rating": 5, "review_id": "Zq4gzMiJys", "review_text": "This paper addresses the challenge of distributed neural embedding (NE) with a focus on privacy protection. To achieve this, the authors extend the concept of federated learning (FL) to NE. However, NE tends to diverge because FL prevents clients from accessing each other's data, leading to inconsistent feature spaces across clients. To mitigate this issue, the authors employ surrogate loss models trained locally, which are then broadcast to all other clients to serve as an anchor. The experiments show promising performance compared to existing baselines.\n\n1. The paper is well-motivated and well-written.\n2. The problem is practical and useful for many real-life applications, though scalability may be the main constraint.\n3. The idea is straightforward, and the experiments seem to verify its effectiveness.\n\n1. **Communication complexity**: If I understand correctly, every client in the proposed method must broadcast the surrogate models to all other clients. Although the surrogate models consist of only one hidden layer, this design results in a communication complexity of $\\mathcal{O}(N^2)$. As the number of clients in the system increases, the additional communication costs will rise dramatically. This might be manageable in some cross-silo settings, where only a few clients participate.\n\n2. **Straggler effect**: Following point (1), the proposed method requires communication among clients. However, clients may drop out during training. It would be insightful if the authors could analyze how missing surrogate loss models would affect overall performance.\n\n3. **Additional privacy concerns**: Sharing surrogate models introduces additional privacy risks, e.g., enabling reconstruction attacks or membership inference. While some recent work empirically shows that such private information is less leaked after distillation (e.g., [1] and [2]), the proposed method might be more vulnerable to privacy attacks without differential privacy.\n\n[1] Dong, Tian, Bo Zhao, and Lingjuan Lyu. \"Privacy for free: How does dataset condensation help privacy?.\" International Conference on Machine Learning. PMLR, 2022.\n[2] Wang, Hui-Po, et al. \"Fedlap-dp: Federated learning by sharing differentially private loss approximations,\" Proceedings on Privacy Enhancing Technologies, 2024.\n\nsee weakness." } ]
zBG7WogAvm
Amortized Bayesian Experimental Design for Decision-Making
Many critical decisions, such as personalized medical diagnoses and product pricing, are made based on insights gained from designing, observing, and analyzing a series of experiments. This highlights the crucial role of experimental design, which goes beyond merely collecting information on system parameters as in traditional Bayesian experimental design (BED), but also plays a key part in facilitating downstream decision-making. Most recent BED methods use an amortized policy network to rapidly design experiments. However, the information gathered through these methods is suboptimal for down-the-line decision-making, as the experiments are not inherently designed with downstream objectives in mind. In this paper, we present an amortized decision-aware BED framework that prioritizes maximizing downstream decision utility. We introduce a novel architecture, the Transformer Neural Decision Process (TNDP), capable of instantly proposing the next experimental design, whilst inferring the downstream decision, thus effectively amortizing both tasks within a unified workflow. We demonstrate the performance of our method across several tasks, showing that it can deliver informative designs and facilitate accurate decision-making.
https://openreview.net/pdf/67b2e48fbef5361774799536072d5907137d322c.pdf
[ { "confidence": 4, "rating": 7, "review_id": "3gJEIEzgWz", "review_text": "This paper proposes a method for decision-aware Bayesian experimental design, where the design is not optimized with respect to the most accurate posterior distribution of the latent parameters but rather with respect to the expected utility gain of the actual (down-stream) decision task.\n\nThis is an innovative paper with high practical relevance. The proposed method appears sound and the corresponding neural networks well designed to suit the goal. Despite my questions and concerns (see below), I am positive about this paper overall and eager to increase my score should my points be addressed.\n\n- The presentation of p(y_Xi | h_t) between Eq 3 and 4 is partially unclear to me. From the definition, it seems this is not actually a distribution but a set of distributions. To me, then notation p(y_Xi | h_t) appears to be quite the abuse of notation because we cannot readily read this it as a single distribution. Can you perhaps think about a different notation that makes this easier to parse and understand? Relatedly, in Equation 4, it appears that we compute an expectation over p(y_Xi | h_t). But how do we compute an expectation over a set of distributions? I think I get what the authors do and want to imply but to me this notation doesn’t help in understanding it.\n- Equation 7: It seems we approximate the predictive distribution always by a Gaussian. I mean this of course works if the true underlying function is some kind of GP, but what if the true predictive distribution is far away from Gaussian? I don’t see this choice to be discussed properly so I consider it a weakness of this paper for now.\n- The discussion of training and inference time can only be found in the appendix. Specifically, training speed seems to be substantial, which of course makes sense for an amortized method. However, I don’t see any discussion for when the training actually amortizes. That is, how many BED tasks do we need to run at minimum before the total (training + “inference”) time of the new method becomes better than those of the competing methods. More generally, I think a discussion of speed should be more prominent in the paper.\n- 6.1 toy example was hard for me to understand at first. Is this just a standard BO task to find the point where the unknown function is maximal?\n\n- In 4.1 Query set: How problematic is the fact that we randomly generate some designs from the design space. Doesn’t this mean we need a distribution over the design space? How can we obtain (or define) such a distribution in general?\n- In 4.1 Query set: You say that in the deployment phase we can obtain the optimal design by optimizing the models (which model’s?) output. How do you optimize this exactly?\n- Given that (non-decision aware) amortized BED methods exist, why are the benchmarks only comparing against non-amortized methods? I suggest to also add amortized methods to the benchmarks unless you can convince me that this is not sensible for some reason.\n- What is the scalability of the method in terms of all relevant dimensions, e.g., dimensionality of xi, y, a, etc?\n- Figure 4: you say that your method provides substantial gains, but at least on the scale in the figure, gains seem to be small. Can you clarify why you feel that the improvements are indeed “substantial gains”?\n- The method has quite a lot of components, I wonder which of the components is responsible for the improved results? For example, how relevant is it to consider non-myopic designs, i.e., how does the method perform when only trained in a myopic setup? Relatedly, are the alternative methods myopic or non-myopic?" }, { "confidence": 5, "rating": 6, "review_id": "V0WZjw3Gmy", "review_text": "The paper looks at the problem of designing Bayesian optimal experiments taking into account the downstream decision making. At the core is a Transformer Neural Decision Process (TNDP) architecture that is trained to amortise the experimental design process whilst simultaneously inferring the optimal downstream decision.\n\n- Relevant and interesting topic: Downstream decision making is what ultimately matters, so taking this into account when designing experiments to collect data can result in more cost- and sample-efficient learning. \n\n- Motivation for the paper as well as clarity of writing are excellent. Contextualisation relative to prior work can be improved as outlined in the next section.\n\n- The proposed Transformer Neural Decision Process (TNDP) architecture is tailored to the BED problem, is well-explained and adds some novelty to the architectures typically used in the field.\n\n### Sections 2.2 & 3.2 and Lindley's decision-theoretic BED [1]:\n\nMy main issue with the paper is the presentation of DUG and EDUG as novel. This framework was first formulated in [1], and is very well summarised in Section 1.3 of [2]. I strongly recommend the authors read that section, and present their Section 3.2 accordingly, acknowledging they follow Lindley, 1972. The questions/comments in the next 2 bullets are a consequence of this omission of literature.\n\n- Second paragraph of Sec 2.2: I am not sure how the predictive distribution $p(y | \\xi, h_t)$ is defined. I would think it is $p(y | \\xi, h_t) = \\mathbb{E}_{p(\\theta |h_t)} [p(y | \\xi, \\theta)]$. Whether or not you compute/approximate the posterior $p(\\theta |h_t)$, or seek to directly approximate $p(y | \\xi, h_t)$ (eg variationally), I think you should explicitly define what this quantity is. \n\n- I am not sure how the utility $u(y_\\Xi, a)$ is defined. From a Bayesian decision-theoretic approach, the utility has to depend on the state of the world $\\theta$, as well as the experiments $\\xi$ you are going to perform (which I guess is implicit in $y_\\Xi$). So shouldn't the \"lowest level\" utility be a function $u(y, \\theta, \\xi, a)$, which you then integrate over $p(\\theta|h_t)$, to obtain $u(y, \\xi, a) = \\mathbb{E}_{p(\\theta|h_t)} [u(y, \\theta, \\xi, a)]$, then take $\\max$ wrt $a$, and finally integrate over the predictive $p(y |\\xi, h_t)$ to obtain an expected utility, which can then act a design ranking criterion, as you do in Eq 4 and (cf Eq 2 in [2]).\n\n### Related work: \n\nFor a field that has such rich history and renewed interest from the ML community recently, the related works section is quite short and sparse on citations. Some areas that are missing include:\n- Decision-theoretic BED: as previously discussed, the general framework of utility-based BED was developed by Lindley (1972).\n- BED + RL: this work touches on some aspects of RL; It might be good to discuss relations recent works in the intersection such as [5] and [6] (in addition to those mentioned)\n- Decision-theoretic approaches in related fields such as Bayesian Optimisation, e.g. [7], [8]\n- Finally, I'm not too familiar with this line of literature, but more recent work around decision transformers---is there any relation between TNDP with works like [9] and [10]?\n\n### Other:\n\n- Line 6: \"most recent BED methods use amortised inference with a policy network\" is not quite correct in the sense that no \"real inference\" (posterior updates on the parameters $\\theta$) are performed. \n- Line 179: \"to ensure the framework satisfied the permutation invariance property of sequential BED\": not all BED problems are permutation invariant. For example, designing experiments for time series models (e.g SIR in [3] and [4]), permutation invariance does not hold. This aspect has been discussed in e.g. Section 3.3 of [3].\n- Assuming you do want a permutation invariant architecture (most design problems fall in that category): by conditioning on $t$ as part of the global information (GI) set, I think you actually break that invariance. This is because encoding $(\\xi, y)$ at time $t$ or at time $s$ will give you different outputs. As far as I can tell from Fig2b), $D_c$ does attend to GI. Could you please explain if that's the case or I have misunderstood something?\n\n-----\n#### References\n\n[1] Lindley, D. V. (1972). Bayesian statistics: A review. Society for industrial and applied mathematics.\n\n[2] Chaloner, K., & Verdinelli, I. (1995). Bayesian experimental design: A review. Statistical science, 273-304.\n\n[3] Ivanova, D. R., Foster, A., Kleinegesse, S., Gutmann, M. U., & Rainforth, T. (2021). Implicit deep adaptive design: Policy-based experimental design without likelihoods. Advances in neural information processing systems, 34, 25785-25798.\n\n[4] Kleinegesse, S., & Gutmann, M. U. (2019, April). Efficient Bayesian experimental design for implicit models. In The 22nd International Conference on Artificial Intelligence and Statistics (pp. 476-485). PMLR.\n\n[5] Mehta, V., Paria, B., Schneider, J., Ermon, S., & Neiswanger, W. (2021). An experimental design perspective on model-based reinforcement learning. arXiv preprint arXiv:2112.05244.\n\n[6] Mehta, V., Char, I., Abbate, J., Conlin, R., Boyer, M., Ermon, S., ... & Neiswanger, W. (2022). Exploration via planning for information about the optimal trajectory. Advances in Neural Information Processing Systems, 35, 28761-28775.\n\n[7] Neiswanger, W., Yu, L., Zhao, S., Meng, C., & Ermon, S. (2022). Generalizing Bayesian optimization with decision-theoretic entropies. Advances in Neural Information Processing Systems, 35, 21016-21029.\n\n[8] Ivanova, D. R., Jennings, J., Rainforth, T., Zhang, C., & Foster, A. (2023, July). CO-BED: information-theoretic contextual optimization via Bayesian experimental design. In International Conference on Machine Learning (pp. 14445-14464). PMLR.\n\n[9] Chen, L., Lu, K., Rajeswaran, A., Lee, K., Grover, A., Laskin, M., ... & Mordatch, I. (2021). Decision transformer: Reinforcement learning via sequence modeling. Advances in neural information processing systems, 34, 15084-15097.\n\n[10] Zheng, Q., Zhang, A., & Grover, A. (2022, June). Online decision transformer. In international conference on machine learning (pp. 27042-27059). PMLR.\n\nIn addition to the questions raised in the Weaknesses section:\n\n1. I think the main contribution of the paper is the TNDP architecture. Have the authors performed any ablations, e.g. not sharing the same embedding block? Not including $t$ in the GI?\n2. In the decision-aware AL experiment: why does the random baseline perform as good as all the other ones?\n3. Could you give guidance on choosing utility functions? For the experiments in the paper it is quite straightforward to define them, but in real-world practical application that might not be the case. This is the reason why the mutual information has become the de facto standard utility in BED." }, { "confidence": 4, "rating": 6, "review_id": "VALImQdKjt", "review_text": "The paper proposes a transformer-based architecture for jointly sampling designs and decisions in Bayesian Experiment Design (BED) using a forward-looking criterion. The latter considers the improvement in maximum expected utility brought about by a new design-outcome pair, where the expectation is taken with respect to the predictive distribution of the model. The main innovation of the paper lies in the coupling between information gain and utility maximization in an amortized, transformer-based framework in the spirit of attentive neural processes. The performance of the new architecture is evaluated on a toy regression task and two more representative models, exhibiting stable performance gains over contender methods.\n\n- The paper is clearly written, the ideas and formulations are stringent and well-justified, overall making it easy to follow and a pleasure to read (with the exception of Section 4.1, see below).\n\n- The proposed architecture and training objectives are novel and seem to unlock both qualitative and quantitative improvements over existing methods. \n\n- The results indicate superior and stable performance of the proposed architecture on two interesting tasks, along a toy 1D GP model which seems to be a standard proof-of-concept task in the neural process (NP) literature.\n\n- Some notational confusion can be avoided by consistently using the notation $a_{1:t}$ to denote a sequence of $t$ elements and $a_t$ to denote the $t$-th element in the sequence. Currently, $h_t$ denotes a sequence, but, e.g., $y_t$ denotes an element, and then again $\\theta_{1:L}$ also represents a sequence. Also, P4L126 is an abuse of notation with slightly confusing wording, such as “the predictive posterior distribution over all possible designs”, whereas the predictive distribution(s) are over future \\textit{outcomes}. This is in no way different than the posterior predictive in Bayesian (non-linear or linear) regression, where the posterior predictive is conditioned on the training data set and the set of (unlabeled) predictors available at test time. Hence, I struggle to understand the need for the convoluted abuse of notation, but I may be missing something. Also section 4.1 suddenly starts using bold font for vectors, which was not the case in the preceding sections. \n\n- Figure 2 is not particularly informative for the data flow, as it does not clearly communicate weight sharing, input-output operations and dependencies (left panel); the right panel comes out of the blue and is not well explained (i.e., what are the elements on the “left” and on the “top”); the description below on P6 does indeed disambiguate the idea behind the construction of the masks, but I believe it is best when figures support and enhance the text and not vice versa.\n\n- Overall, I feel that Section 4.1 is the weakest link in the paper, and I believe the authors can think about optimizing the ratio of details dispersed between the main text and the appendix. For instance, there is no need to reiterate established transformer-based computations, but it could be helpful to explicate the construction of the masks, the representation types (e.g., vectors, sequences of vectors,...?), and the precise partitioning of the components into keys, queries, and values.\n\n- According to my understanding, none of the contender methods in the experiments is an amortized method. Wouldn’t some of the existing amortized BED methods (e.g., as highlighted in the Related Work) make for suitable benchmarks, despite not optimizing for future decisions?\n\n- The topic of model misspecification is never mentioned in the paper, even though the comprehensive review paper [1] states that it remains a major unsolved issue in BED and in amortized Bayesian inference more generally [2]. I believe this should also be acknowledged in the current paper and the authors can potentially think about quantifying the impact of model misspecification in a small ablation study in the final version of the manuscript.\n \nI am happy to discuss these points with the authors and increase my score if they are addressed / clarified.\n\n[1] Rainforth, T., Foster, A., Ivanova, D. R., and Bickford Smith, F. (2024). Modern Bayesian\n429 experimental design. Statistical Science, 39(1):100–114.\n\n[2] Schmitt, M., Bürkner, P. C., Köthe, U., & Radev, S. T. (2024). Detecting Model Misspecification in Amortized Bayesian Inference with Neural Networks: An Extended Investigation. arXiv preprint arXiv:2406.03154.\n\n- Perhaps section 2 can be organized in a way to avoid singleton nested subsection (i.e., 2.1.1)?\n\n- P4L130: Isn’t there also an assumption that decision are optimal only if there is no model misspecification (i.e., that we are working with the posterior of the “true” model)? \n\n- Are there any practical disadvantages of assuming a diagonal Gaussian predictive distribution? Can complex models induce multimodal or highly correlated predictive distributions that?" }, { "confidence": 4, "rating": 6, "review_id": "LaytIYhfLY", "review_text": "This paper tackles an important problem of designing experiments in a way that directly optimizes downstream decision-making tasks, going beyond just inferring parameters of interest. The authors make several valuable contributions:\n\n1. They introduce the concept of Decision Utility Gain (DUG) to quantify how much an experimental design improves the expected utility of the downstream decision. \n\n2. They propose a novel neural architecture called the Transformer Neural Decision Process (TNDP) that amortizes both the experimental design selection and the approximation of the predictive distribution needed for decision-making. This unified amortized framework is a key innovation.\n\n3. The authors develop a non-myopic training objective that looks beyond just the immediate decision utility to account for effects of the current design on future rewards.\n\n4. Empirically, they demonstrate TNDP's effectiveness over traditional methods on various tasks like active learning, hyperparameter optimization, showing it can find informative designs and make accurate downstream decisions.\n\nIn summary, this work makes valuable conceptual and technical contributions to the area of Bayesian experimental design by pioneering decision-aware amortized methods. It opens up new research directions for further enhancing real-world decision-making via optimized experimental data acquisition.\n\n- The paper presents a novel problem formulation by introducing the concept of Decision Utility Gain (DUG), which shifts the focus of experimental design from reducing parameter uncertainty to directly optimizing downstream decision utility. This new perspective is a creative departure from traditional Bayesian experimental design (BED) approaches.\n- The application of amortized inference techniques to decision-aware experimental design can be considered an original contribution, as it represents a new domain for these methods beyond traditional BED.\n- The empirical evaluation is comprehensive, spanning diverse tasks such as active learning, hyperparameter optimization, and synthetic regression problems. The results demonstrate the consistent superiority of TNDP over traditional methods.\n\n- The authors could provide a more rigorous analysis of the properties and characteristics of the TNDP architecture, such as its convergence behavior, sample complexity, and theoretical guarantees (if any) regarding the quality of the proposed designs and decisions.\n- The experimental evaluation, while comprehensive, focuses primarily on synthetic and benchmark datasets. While these serve as important proof-of-concept demonstrations, the paper could benefit from including real-world case studies or applications to further validate the practical utility of the proposed framework.\n- While the amortized nature of TNDP is highlighted as a key advantage, the paper could provide a more detailed analysis of the computational complexity and scalability of the proposed approach. This analysis could include factors such as the training time required for different problem sizes, the memory footprint, and the scalability of the attention mechanisms used in the Transformer architecture.\n\n- Can the authors provide a more in-depth theoretical analysis of the Decision Utility Gain (DUG) concept, including its relationship with existing concepts like Value of Information (VoI) or Information Gain (IG)?\n\n- Have the authors explored the sensitivity of TNDP's performance to different hyperparameter choices, such as the discount factor α used in the non-myopic objective? If so, can they share insights into this analysis?" } ]
zAuerb1KGx
Multi-Label Learning with Stronger Consistency Guarantees
We present a detailed study of surrogate losses and algorithms for multi-label learning, supported by $H$-consistency bounds. We first show that, for the simplest form of multi-label loss (the popular Hamming loss), the well-known consistent binary relevance surrogate suffers from a sub-optimal dependency on the number of labels in terms of $H$-consistency bounds, when using smooth losses such as logistic losses. Furthermore, this loss function fails to account for label correlations. To address these drawbacks, we introduce a novel surrogate loss, *multi-label logistic loss*, that accounts for label correlations and benefits from label-independent $H$-consistency bounds. We then broaden our analysis to cover a more extensive family of multi-label losses, including all common ones and a new extension defined based on linear-fractional functions with respect to the confusion matrix. We also extend our multi-label logistic losses to more comprehensive multi-label comp-sum losses, adapting comp-sum losses from standard classification to the multi-label learning. We prove that this family of surrogate losses benefits from $H$-consistency bounds, and thus Bayes-consistency, across any general multi-label loss. Our work thus proposes a unified surrogate loss framework benefiting from strong consistency guarantees for any multi-label loss, significantly expanding upon previous work which only established Bayes-consistency and for specific loss functions. Additionally, we adapt constrained losses from standard classification to multi-label constrained losses in a similar way, which also benefit from $H$-consistency bounds and thus Bayes-consistency for any multi-label loss. We further describe efficient gradient computation algorithms for minimizing the multi-label logistic loss.
https://openreview.net/pdf/238907b99661ce02cecd832f90a1e415af69d730.pdf
[ { "confidence": 1, "rating": 5, "review_id": "RCZyMlSv6n", "review_text": "This paper proposes an improved approach to multi-label learning using $\\mathcal{H}$-consistency bounds by introducing the multi-label logistic loss to effectively handle label correlations. It extends to various multi-label losses, ensuring Bayes-consistency across diverse settings, and includes efficient gradient computation algorithms for minimizing the proposed loss function. This work offers a unified framework with robust consistency guarantees, advancing beyond traditional methods in multi-label learning.\n\n- Introducing the multi-label logistic loss, which effectively addresses label correlations often overlooked by traditional binary relevance surrogates under Hamming loss.\n\n- The paper establishes $\\mathcal{H}$-consistency bounds for a wide range of multi-label losses, ensuring Bayes-consistency across diverse multi-label learning scenarios. This extends beyond previous research that primarily focused on specific loss functions.\n\n- It offers a unified framework that accommodates various multi-label losses, including novel extensions and adaptations from standard classification. This is supported by efficient gradient computation algorithms specifically designed for minimizing the proposed multi-label logistic loss.\n\n- The motivation and background of this paper lack clear logic and hierarchy. It is suggested to first outline the shortcomings of existing methods and then clearly present the research questions addressed in this paper.\n\nPlease check the weaknesses." }, { "confidence": 4, "rating": 7, "review_id": "FXaut5Lf5V", "review_text": "The paper explores surrogate losses and algorithms for multi-label learning, focusing on \\( \\mathcal{H} \\)-consistency bounds. It identifies the limitations of Hamming loss and introduces a new multi-label logistic loss that accounts for label correlations. The study extends this to a broader family of multi-label losses and adapts comp-sum losses from standard classification to multi-label learning. The authors propose a unified framework providing strong consistency guarantees for multi-label losses and describe efficient gradient computation methods for minimizing these losses.\n\n1. The authors conduct a detailed analysis of the popular Hamming loss in multi-label learning when using smooth losses. They identify its sub-optimal dependency on the number of labels and its failure to account for label correlations, providing valuable insights into the limitations of existing loss functions. \n1. The authors introduce an improvement by presenting a novel surrogate loss, the multi-label logistic loss, which accounts for label correlations and benefits from label-independent \\( \\mathcal{H} \\)-consistency bounds. This innovation addresses the identified drawbacks of existing loss functions and broadens the analysis to include a more extensive family of multi-label losses, including a new extension based on linear-fractional functions related to the confusion matrix.\n1. The authors extend their work by adapting multi-label logistic losses to more comprehensive multi-label comp-sum losses. By demonstrating that this family of surrogate losses benefits from \\( \\mathcal{H} \\)-consistency bounds and Bayes-consistency across any general multi-label loss, they propose a unified surrogate loss framework. This expands upon previous work that only established consistency for specific loss functions, showcasing the applicability of their approach.\n1. The authors' writing is clear and well-structured, with each theoretical assumption and conclusion articulated distinctly.\n\n1. In section 4, although the excellent properties of the proposed multi-label logistic loss are proven, providing a detailed explanation of each component of this loss would further enhance the reader's understanding of its superiority.\n2. If the advantages of this loss could be demonstrated through experimental validation, it would be more intuitive for readers.\n\n1. Could the authors elaborate on the individual components of the multi-label logistic loss and how each contributes to its overall effectiveness?\n2. Given the detailed nature of this loss function, what is the computational complexity associated with implementing the multi-label logistic loss compared to other traditional loss functions? It is better to be able to verify the advantages and complexity of the algorithm through experiments." }, { "confidence": 4, "rating": 8, "review_id": "GCv8AdRJQL", "review_text": "The authors study surrogate losses and algorithms for multi-label learning via H-consistency bounds and introduce a novel surrogate loss, multi-label logistic loss in this paper. By broadening the H-consistency bounds analyses to more general multi-label losses and extending to multi-label comp-sum losses, the authors provide a unified surrogate loss framework for H-consistency.\n\n1. This paper is well-written and easy to follow.\n2. The authors make comprehensive reviews of related works, including their pros and cons.\n3. The authors provide rigorous theoretical analyses of the limitations of existing binary relevance loss, the H-consistency of the proposed multi-label logistic loss, and the extensions to more general multi-label losses. The theoretical contribution is important for multi-label learning.\n4. The authors demonstrate the efficient computation of the gradient for the proposed multi-label logistic loss and conduct time complexity analyses.\n\n1. I understand that this is a theoretical work, and experiments of empirical evaluations are not its focus. However, adding experiments to compare the proposed loss with commonly used multi-label losses on standard datasets would make the paper more comprehensive and appealing. Besides, it can also verify whether the proposed loss is effective in practice.\n2. There is a typo in line 300.($1-\\bar{L}_{ham}(\\cdot, y)$).\n\nSee above weaknesses." }, { "confidence": 2, "rating": 5, "review_id": "DaMkrUgFqM", "review_text": "The paper derives H-consistency bounds for binary-relevance style surrogate losses, as well as a new surrogate, for mutli-label learning problems, showing that the proposed multi-label logistic loss whose upper-bound on the Hamming loss is independent of the number of labels.\n\nThe $H$-consistency bounds provided in the paper are more informative than existing Bayes-consistency results, as they hold not just in the infinite limit.\n\nThe novel multi-label logistic loss allows upper-bounds that do not depend on the number of labels.\n\nThe paper does not provide any experiments. While this is OK for a theory paper, it does mean that the question of whether the new surrogate works better in practice remains unanswered (which should be reflected in the conclusion section, at least), for two reasons:\na) all the theory provides are upper-bounds, which might not be indicative of actual performance\nb) while the theory provides better guarantees for the task loss if the surrogate is reduced to the level $\\epsilon$, it might be that reducing the new surrogate is just much more difficult than optimizing binary relevance. In particular, if the computational cost for reducing the multi-label logistic loss to the same level $\\epsilon$ as binary relevance is larger by at least $\\sqrt{l}$, then, normalized for compute, the advantage of the new surrogate vanishes.\n\nIt is claimed that the gradient of the multi-label logistic loss can be computed efficiently, yet the presented formulas still contain sums over the entire $2^l$ entries of the label space. Even if they can be precomputed once, already at moderate label space sizes of l ~ 100 would these quantities be intractable.\n\nIt is annoying that most equations are unnumbered. Even if they are not referred to in the paper, your readers and reviewers might want to reference them.\n\nthe equation after l. 328 switches between $\\mathbf{\\mathsf{y}}'$ and $y'$; and $y''$ changes to $y$\n\nl. 114: I'm not sure what the point here is of introducing the threshold $t$, if it is set to $0$ in the same sentence? Couldn't $t$ be simply absorbed into $h$?\n\nl. 178-180; 208: Arguably, completeness does _not_ hold in practice, because there is some form of upper-bound (e.g., weights representable in the given floating-point format)\n\nl. 231. Binary relevance is not just Bayes-consistent w.r.t. the Hamming-loss, but also works for precision-at-$k$\n\nIn the equation after line 542, I think $\\bar{L}$ should be $\\bar{L}_\\mathrm{ham}$? \n\nl. 503: I think $q$ should be $q_i$, and there is a weird subscript on that line.\n\nl. 174 consist -> consisting\n\nIn several places, the paper talks about label correlations, in particular, it claims an advantage of the new surrogate is that it takes into account label correlations. However, it is never specified what exactly that means (conditional correlations, i.e., dependent on the specific instance $x$, or marginal correlations). Further, for many loss functions (such as Hamming-loss), the Bayes-optimal prediction is a function of purely the label marginals $P[Y_i|X]$, so it is not clear to me whether taking into account label correlations actually is an advantage in those cases.\n\nThe paper mentions the decision-theoretic and the empirical-utility framework, but then seems to consider only loss functions that are defined on the level of a single instance. Aren't the two settings that same in that case?\n\nl. 525: Is the argmin unique? Are we breaking ties arbitrarily?\n\nDespite being part of the theorem, $\\mathcal{M}$ does not appear anywhere in the proof of 3.1\n\nI tried going through the proof of 4.1, but I'm not quite sure how to construct the hypothesis $h'$ with that realized $s^{\\mu}$, not do I see why the minimum is achieved for $s_h = s_y$, unless $c_h = c_y$." } ]
zApFYcLg6K
On Differentially Private U Statistics
We consider the problem of privately estimating a parameter $\mathbb{E}[h(X_1,\dots,X_k)]$, where $X_1$, $X_2$, $\dots$, $X_k$ are i.i.d. data from some distribution and $h$ is a permutation-invariant function. Without privacy constraints, the standard estimators for this task are U-statistics, which commonly arise in a wide range of problems, including nonparametric signed rank tests, symmetry testing, uniformity testing, and subgraph counts in random networks, and are the unique minimum variance unbiased estimators under mild conditions. Despite the recent outpouring of interest in private mean estimation, privatizing U-statistics has received little attention. While existing private mean estimation algorithms can be applied in a black-box manner to obtain confidence intervals, we show that they can lead to suboptimal private error, e.g., constant-factor inflation in the leading term, or even $\Theta(1/n)$ rather than $O(1/n^2)$ in degenerate settings. To remedy this, we propose a new thresholding-based approach that reweights different subsets of the data using _local Hájek projections_. This leads to nearly optimal private error for non-degenerate U-statistics and a strong indication of near-optimality for degenerate U-statistics.
https://openreview.net/pdf/a34864164d8a57ea128abf709b53e25e642b7a1b.pdf
[ { "confidence": 3, "rating": 7, "review_id": "R1Rnu9t9Df", "review_text": "This paper addresses the problem of estimating U statistics under central differential privacy. U statistics are established minimum variance unbiased estimators for estimable parameters in the form $\\mathbb{E} h (X_1, ..., X_k)$, where $h$ is a kernel and for all $i$ $X_i$ is i.i.d. from some underlying distribution. In other words, U statistics estimate averages of kernels applied to subsets of the data of degree (size) $k$. This type of problem arises in multiple statistical tests such as goodness-of-fit tests and Pearsons's chi-squared tests, uniformity testing, subsampling and other scenarios. While many methods have been studied for differentially private mean estimation, the research on private U statistics is in its early stage and has so far mainly focused on local differential privacy models and discrete data. This paper seeks to provide differentially private U statistics estimators achieving nearly optimal private error for both the case of non-degenerate kernels and degenerate kernels.\n\nThe main contributions of this paper are: i) it derives the lower bound for private algorithms for the non-degenerate kernel case (Theorem 1); ii) it finds that applying off-the-shelf private mean estimation procedures to U statistics estimation yields suboptimal error; iii) it proposes an algorithm that achieves nearly optimal private error in the non-degenerate kernel case, and evidence of near optimality for bounded degenerate kernels. \n\nThe proposed algorithm (Algorithm 1) is based on representing U statistics via the Hájek projection, and leverages the fact that local Hájek projections enjoy strong concentration around the conditional mean. Basically, if all local Hájek projections $\\hat h(i)$ are within a certain threshold distance from the pre-computed empirical mean $A_n$, the output $\\tilde{A}_n$ on line 14 is going to be equal to $A_n$; if not, for every subset $S$ containing a bad index, $h(S)$ is replaced by a weighted combination of $h(S)$ and $A_n$. The choice of threshold $\\xi$ ensures $L = 1$ with high probability, maintaining a balance between excluding bad data and preserving good data, while also keeping the sensitivity of the final adjusted mean $\\tilde{A}_n$ small​, which is crucial for differential privacy. A lower bound for sub-Gaussian non-degenerate kernels is provided (Corollary 1) and Algorithm 1 is proven to match this lower bound. It is also shown that Algorithm 1 matches the lower bound for bounded degenerate kernels (Corollary 2).\n\nThe paper discusses a wide range of applications of the proposed method to uniformity testing, goodness-of-fit tests, Pearson’s chi-squared tests, symmetry testing, and sparse graph statistics.\n\nThis paper is clear, well-structured and provides rigorous derivations and proofs to back the proposed methods and claims. \n\nThe paper addresses a notable gap in current differential privacy research, which is U statistics under differential privacy. The authors derive lower bounds for both the private sub-Gaussian non-degenerate kernel case and the private bounded degenerate kernel case. These bounds support the proofs that the proposed method achieves i) near-optimality for sub-Gaussian non-degenerate kernels and ii) strong evidence of near optimality for the bounded degenerate case. These results are valuable in the context of the differential privacy research community.\n\nThe contributions are clearly highlighted.\n\nI appreciate the effort by the authors to make the results as clear as possible for the reader. In particular, the table summary of the error of different private methods in Table 1 makes it easy to understand the relative error performance of different methods at glance; similarly, in a couple of instances the authors provide key intuitions behind the proposed methods, which helps break down important technical steps that are fundamental to the proposed method. The notation is also clear and consistent.\n\nThe proposed method has wide applicability, as demonstrated in the Applications section, where the authors describe the usefulness of the method spanning multiple statistical tests and sparse graph statistics. \n\nComputational complexity and alternative computationally efficient approximations of U statistics are also discussed.\n\nExtensive proofs and supporting technical derivations are provided in Appendix, although I did not review it in detail due to time constraints.\n\nI didn’t find any significant weaknesses in this paper. The paper is highly technical and notation-heavy, but as I described in the previous section, it still reads very clearly. A few of minor notes:\n\n- Since [53] appears to be foundational to the development of the main proposed method, it is worth dedicating a short description of it and/or specification of which ideas in [53] have been built upon.\n- Theorem 2 is not followed by a pointer to its proof in Appendix. Please reference the proof in Appendix.\n- Limitations of the proposed methods are briefly mentioned throughout the paper, but I would prefer if they were addressed separately in a short dedicated paragraph or subsection, making them more easily identifiable by a reader skimming through the paper.\n\nI would ask the authors to address the minor points I mentioned in \"Weaknesses\". I don't have other questions at the moment." }, { "confidence": 3, "rating": 5, "review_id": "7PeC6S6ZhW", "review_text": "The paper addresses the problem of private estimation of U-statistics. The authors propose a new thresholding-based approach using local Hájek projections to achieve nearly optimal private error in both non-degenerate and degenerate settings.\n\n1. The paper provides solid theoretical foundations, including lower bounds for the private error and theoretical guarantees for the proposed algorithm.\n2. The proposed method is applicable to a wide range of U-statistics problems, from hypothesis testing to subgraph counting in random geometric graphs.\n3. The method aims to provide private confidence intervals for U-statistics, addressing a gap in existing literature.\n\n1. The paper is difficult to read due to the heavy use of parameters and notations, many of which are not well-defined or explained, particularly in the algorithmic sections.\n2. The manuscript provides non-asymptotic results for the DP estimators, but lacks the asymptotic normality results typical for non-private version of U-statistics, which are crucial for practical applications. I think the asymptotic variance of the private U-statistics will change compared to the non-private version. More discussion on expected on this difference. \n3. To provide private confidence intervals, the variance should also be estimated privately. This aspect is not thoroughly discussed, making the testing problem in Section 5 less meaningful.\n4. There are no experimental results to demonstrate the practical performance of the proposed algorithms, which is a significant omission.\n5. The paper only consider the 1-dimensional data $X$ throughout the paper. A general discussion of d-dimensional vector are needed because it may suffer from the curse of dimensionality, which will affect the generalizability of the results.\n\n1. What is the asymptotic results of the private U-statistics?\n2. How do you ensure get DP estimators for the variance when doing inference?" }, { "confidence": 4, "rating": 6, "review_id": "490HNDim2b", "review_text": "This paper introduces a new algorithm for constructing U-statistics under central DP. Compared to the naive method, the proposed estimator exhibits lower variance. The authors also derive a lower bound for private algorithms. Several statistical applications are presented to illustrate the methodology.\n\nU-statistics are widely applied in statistical inference. The improvements in private estimation presented in this paper are useful, and the theoretical results are solid.\n\nThe calculation of the privacy budget lacks precision.\n\n1. Could the authors consider refining the computation of the privacy budget? Specifically, users may prefer a $\\epsilon$-DP method over an $\\mathcal{O}(\\epsilon)$-DP method.\n\n2. Following the first point, could the authors discuss the performance of the proposed method across different values of $\\epsilon$?" }, { "confidence": 3, "rating": 7, "review_id": "ra4UMKf4MF", "review_text": "This paper studies differentially private estimation of U-statistics (estimators for such statistics are averages of functions $h$ that depend on a number of i.i.d. samples $X_1,\\dots,X_k$). This is a generalization of the commonly studied mean estimation problem where $k=1$ and such estimators with $k>1$ are widely applied across statistics. The authors are primarily interested in cases where $h$ is a subgaussian kernel i.e. the distribution of $h(X^k)$ is subgaussian or cases where the range of $h$ is bounded (and satisfies a certain degeneracy property).\n\nThe main contributions of the paper are as follows:\n1) They first consider approaches that reduce differentially private U-statistics to differentially private mean estimation and argue that natural approaches result in estimators that are either suboptimal in either the non-private error terms or the private error-terms. The estimators they consider are a naive estimator that reduces to the i.i.d. case by computing the function $h$ on a partition of the dataset before applying a subgaussian mean estimation algorithm on the resulting sample of function values, and a more complicated estimator that generalizes the CoinPress algorithm to work with weakly dependent samples. The former has suboptimal non-private error while the latter has a suboptimal privacy term (the dependence on $k$ is suboptimal).\n\n2) They then consider a different strategy inspired by work on privately estimating the sampling probability for Erdos-Renyi graphs. This strategy exploits the concentration of the 'local Hajek projections' around the true mean. The idea is to classify coordinates into good and bad coordinates respectively based on how close their projections are to the optimal non-private statistic, and reduce the local sensitivity of the average being computed by reducing the influence of bad coordinates by reducing the weight of the corresponding terms in the average. They can then compute an appropriate smooth upper bound to the local sensitivity of this average and add less noise. They use this idea to obtain a general result for bounded kernels, and then use it to get the optimal rate for subgaussian-nondegenerate kernels, and a bound for general degenerate bounded kernels. They also provide some indication that their bound for general degenerate bounded kernels may be optimal.\n\n3) They also show that their results can be used to privatize 'subsampled' estimators with similar error rates that are computationally much more efficient. Finally, they apply these results to settings where U statistics are used such as various hypothesis testing problems.\n\n1) U-statistics are widely used across statistical testing and estimation, and have been relatively understudied in the privacy literature. This paper explores them quite generally and does a good job of suggesting problems for future work.\n\n2) They do a good job of explaining how natural extensions of traditional DP mean estimators perform sub-optimally in estimating U-statistics.\n\n3) The estimator based on local Hajek projections (and smooth sensitivity) seems quite technically novel and interesting.\n\n1) In the applications section, it would be good to discuss existing private algorithms for the corresponding tasks (if there are any) and compare the bounds that are obtained.\n\n2) In the Hajek projection algorithm, it would be nice if they explained how they build on the techniques from [Ullman Sealfon NeurIPS 2019]- which parts are borrowed from that work and which parts are new.\n\nIn equation A.41/42 is S missing from the subscript? Also what is j here? Do you mean $i^*$?" } ]
z86knmjoUq
PURE: Prompt Evolution with Graph ODE for Out-of-distribution Fluid Dynamics Modeling
This work studies the problem of out-of-distribution fluid dynamics modeling. Previous works usually design effective neural operators to learn from mesh-based data structures. However, in real-world applications, they would suffer from distribution shifts from the variance of system parameters and temporal evolution of the dynamical system. In this paper, we propose a novel approach named \underline{P}rompt Evol\underline{u}tion with G\underline{r}aph OD\underline{E} (\method{}) for out-of-distribution fluid dynamics modeling. The core of our \method{} is to learn time-evolving prompts using a graph ODE to adapt spatio-temporal forecasting models to different scenarios. In particular, our \method{} first learns from historical observations and system parameters in the frequency domain to explore multi-view context information, which could effectively initialize prompt embeddings. More importantly, we incorporate the interpolation of observation sequences into a graph ODE, which can capture the temporal evolution of prompt embeddings for model adaptation. These time-evolving prompt embeddings are then incorporated into basic forecasting models to overcome temporal distribution shifts. We also minimize the mutual information between prompt embeddings and observation embeddings to enhance the robustness of our model to different distributions. Extensive experiments on various benchmark datasets validate the superiority of the proposed \method{} in comparison to various baselines.
https://openreview.net/pdf/c8b66405bc52ba1031d1591eec96d618612ff575.pdf
[ { "confidence": 4, "rating": 5, "review_id": "RMPUztzEcQ", "review_text": "The paper presents a new approach called Prompt Evolution with Graph ODE (PURE) for non-distributed fluid dynamics modeling. PURE first learns from historical observations and system parameters in the frequency domain to explore multi-view contextual information, which can efficiently initialize the cue embedding. Interpolations of the observation sequences are then merged into the graph ODE so that the time evolution of the model-adaptive cue embeddings can be captured. These time-evolving cue embeddings are then incorporated into the underlying predictive model to overcome spatio-temporal distributional variations. In addition the paper minimizes the mutual information between the cue embeddings and the observation embeddings to enhance the robustness of the model to different distributions. Finally, extensive experiments conducted on various kinds of benchmark datasets validate the superiority of the proposed PURE compared to various baselines.\n\n1. The idea of the paper is novel. It is the first to link prompt learning to dynamic system modeling for out-of-distribution problems. \n \n2. This paper is technically sound. PURE first learns initialized prompt embeddings from historical observations and system parameters, and then employs a graph ODE with interpolated observation sequences to capture the continuous evolution of their model adaptation under out-of-distribution changes. \n\n3. The experimental results show the effectiveness of PURE in different challenging environments.\n\n1. The contribution of the proposed method in dealing with the OOD problem needs to be further clarified since the advantages of PURE over the previous efforts, such as Refs. [7, 67, 14, 72], etc., to address the OOD problem are not listed.\n\n2. The writing of the paper needs to be improved. Some of the symbols in the method description section are not defined, e.g., what do P and N in Equation 9 refer to?\n\n3. The experiment is not comprehensive enough. (a) The reasons for selecting baselines are not explained. Data augmentation [66, 7], invariant feature learning [39, 69, 38], adversarial training [67, 7], and domain adaptation [32, 14] are mentioned in the paper in related work for solving the OOD problem, but they are not be compared as baselines in the experiment. (b) The experiments in this paper do not state whether noisy data are considered. (c) The authors just give a brief description of the results without analyzing the reasons behind the high performance.\n\n1. What are the advantages of PURE over previous efforts to address OOD? \n\n2. Some of the symbols in the method description section are not defined, e.g., what do P and N in Equation 9 refer to?" }, { "confidence": 3, "rating": 4, "review_id": "0i7XHiuD2S", "review_text": "- The paper aims to improve the out-of-distribution (OOD) generalization of fluid dynamics modeling.\n\n- Two types of OOD scenarios are targeted: OOD across different systems and OOD within the same system across different timestamps.\n\n- The paper proposes a framework named PURE, composed of modules including:\n - Multi-view Context Exploration, which explores spatio-temporal data using both the attention mechanism and the frequency domain;\n - Time-evolving Prompt Learning, which incorporates the interpolation of observation sequences;\n - Model Adaptation with Prompt Embeddings, which leverages time-evolving prompts to mitigate temporal distribution shifts.\n\n- Extensive experiments on a range of fluid dynamics datasets support the claim.\n\n- Significant topic: OOD generalization in fluid dynamics modeling.\n\n- Well-motivated, as OOD generalization is a crucial challenge in this field.\n\n- The presentation effectively delivers the message.\n\n- Extensive experiments have been conducted.\n\n- My major concern with the paper is that the OOD challenge in dynamics modeling is not well-formulated. The paper describes the OOD scenario verbally as \"*different dynamical systems could involve different parameters in underlying rules*\" and \"*during long-term auto-regressive forecasting, the input data distribution could vary hugely during temporal evolution,*\" which is straightforward and easy to understand. However, the mathematical formulation of these scenarios is absent. This formulation should be the foundational basis of the topic, as we need to clearly define the problem before addressing it.\n\n- Given the lack of mathematical formulation of the challenge, I find myself lost in the proposed approach section, unsure of the necessity for specific components. While I understand the function of each component, I cannot see why it is needed or which gaps it aims to bridge in the absent mathematical framework.\n\n- Why is the proposed method termed \"prompt\"? Is there a connection to prompt tuning in large language models?\n\n- How do you quantify the distribution shift in dynamics modeling? Can you rank the 'difficulty level' of OOD generalization in your experiments and analyze in which scenarios your method stands out and why?\n\nNA" }, { "confidence": 5, "rating": 5, "review_id": "tQZ3GEfz5x", "review_text": "This paper pioneers the connection of prompt learning with dynamical system modeling to address the challenge of out-of-distribution shifts. The proposed PURE method initializes prompt embeddings by learning from historical observations and system parameters.\n\n1.The paper is easy to follow. \n2.The proposed method is sound and innovative.\n3. The authors provide theoretical proof and show comprehensive experimental comparisons.\n\n1. Some results may be incorrectly labeled as suboptimal in table, and there are errors in the use of some symbols.\n2. The explanation of the experimental results is not detailed enough, making some experiments difficult to understand.\n3. The proposed method is aimed at OOD (Out-Of-Distribution), but the experiments lack comparison and discussion with methods specifically targeting OOD, such as [1] and [2].\nReference:\n[1] Kirchmeyer, Matthieu, et al. \"Generalizing to new physical systems via context-informed dynamics model.\" International Conference on Machine Learning. PMLR, 2022.\n[2] Yin, Yuan, et al. \"LEADS: Learning dynamical systems that generalize across environments.\" Advances in Neural Information Processing Systems 34 (2021): 7561-7573.\n\nQ1. There might be misuses of symbols in the paper, such as, Change \"xi\" to \"si\" in line 65, Change \"xq\" to \"xq\" in line 96, \"Zero-shot Experiments\" and \"Generalization Experiments\" in line 215 should be treated equally and placed on separate lines.\nQ2. Is there an issue with the second-best data in Table 2? For example, in the column w/OOD of SPHERICAL-SWE, the second-best should be DGPDE 0.0028. The corresponding improvement results also need to be modified.\nQ3. What does the clustering in Figure 5 represent? Could you provide a detailed explanation?\nQ4. The paper utilizes mutual information to decouple different prompt embeddings and observation embeddings, reducing the sensitivity of observation embeddings to different distributions. However, I don't quite understand the purpose of decoupling. Observational embeddings are related to the environment, and prompt embeddings are related to the environment as well; they are inherently correlated. Please provide further explanation." }, { "confidence": 2, "rating": 7, "review_id": "q2thl2PsL7", "review_text": "The paper proposes a graph ODE-based approach for OOD fluid dynamics modeling. PURE aims to learn time-evolving prompts via graph ODE for adaptation of spatio-temporal forecasting models on OOD scenarios. To address temporal distribution shifts, the interpolation of obersvation sequences are combined into graph ODE framework to learn evolution of prompt embeddings.\n\n- The paper proposes a new approach that connects prompt learning and dynamical system modeling which addresses OOD shifts.\n- By learning time-evolving prompts that adapt to changes in system parameters and temporal evolution, the approach can enhance model robustness.\n- The paper provides theoretical analysis on incorporating observations during evolution.\n- Experiments on diverse benchmarks show generalization ability to OOD and different prediction length.\n\nAs I am not an expert in this field, I am unable to find major concerns or weakness of the approach.\n- As the method is based on attention, the proposed approach may have limited scalability and take long computation time. Is there a comparison on these with the previous works?\n\nPlease address the questions in the Weaknesses." } ]
z7h7zMgyPJ
The Many Faces of Optimal Weak-to-Strong Learning
Boosting is an extremely successful idea, allowing one to combine multiple low accuracy classifiers into a much more accurate voting classifier. In this work, we present a new and surprisingly simple Boosting algorithm that obtains a provably optimal sample complexity. Sample optimal Boosting algorithms have only recently been developed, and our new algorithm has the fastest runtime among all such algorithms and is the simplest to describe: Partition your training data into 5 disjoint pieces of equal size, run AdaBoost on each, and combine the resulting classifiers via a majority vote. In addition to this theoretical contribution, we also perform the first empirical comparison of the proposed sample optimal Boosting algorithms. Our pilot empirical study suggests that our new algorithm might outperform previous algorithms on large data sets.
https://openreview.net/pdf/2671bac5a8424c491302de1f86dcfa5df321520d.pdf
[ { "confidence": 4, "rating": 8, "review_id": "JBrAWjWhFm", "review_text": "This paper presents an efficient and simple weak-to-strong learner that has optimal in-expectation error. In weak-to-strong learning, we are given a dataset of $m$ points from a distribution, and a $\\gamma$-weak learner that returns hypotheses from a class of VC dimension $d$. AdaBoost, which is a textbook weak-to-strong learner, makes $O(\\ln(m)/\\gamma^2)$ total invokations to the weak learner, and the best-known analysis for it shows that it suffers an in-expectation error $O\\left(\\frac{d\\ln(m/d)\\ln(m)}{\\gamma^2 m}\\right)$. Larsen and Ritzert (2022) constructed a weak-to-strong learner, that has expected error $O(d/\\gamma^2 m)$. Furthermore, they showed that this is the optimal error that one can obtain from $m$ training examples and a $\\gamma$-weak learner. However, the weak-to-strong learner by Larsen and Ritzert (2022) makes $O(m^{0.8}/\\gamma^2)$ invokations to the weak learner --- which is exponentially worse than AdaBoost. Another bagging-based-boosting algorithm due to Larsen (2023), which also achieves the optimal expected error of $O(d/\\gamma^2m)$, makes only $O((\\ln m)^2/\\gamma^2)$ invokations to the weak-learner. This is still a log factor worse than AdaBoost. Could we then hope to obtain a tighter analysis of the error of AdaBoost, and show that it obtains the optimal error with only $O(\\ln(m)/\\gamma^2)$ invokations to the weak learner? Unfortunately, no. Høgsgaard et al. (2023) showed that AdaBoost necessarily suffers an expected error which is at least $\\Omega(d\\ln(m)/\\gamma^2 m)$.\n\nCan we then at least shoot for a different weak-to-strong learner that attains the optimal expected error of $O(d/\\gamma^2m)$, and also invokes the weak learner only $O(\\ln(m)/\\gamma^2)$ many times (which is the AdaBoost gold standard)? This paper answers the question in the affirmative, with a remarkably simple weak-to-strong learner that they call Majority-of-29. The algorithm is exceedingly simple to describe: Partition the training dataset into 29 disjoint sub-samples of size $m/29$ each. Run AdaBoost on each subsample, and return the majority vote over the AdaBoosts. Since each AdaBoost makes only $O(\\ln(m)/\\gamma^2)$ calls to the weak learner, and we run a constant (29) many AdaBoosts, the total number of calls to the weak learner is $O(\\ln(m)/\\gamma^2)$ as required. Further, using an analysis similar to the recent majority-of-3-ERMs algorithm of Aden-Ali et al. (2023), the authors are able to show that the expected error of Majority-of-29 is $O(d/\\gamma^2m)$. The analysis from that work does not extend in a trivial manner, and the authors are required to make appropriate technical modifications and enhancements. The number 29 emerges from the analysis --- the authors require showing a new generalization bound for margin-based classifiers (they show a generalization bound of the order $O((d/\\gamma^2m)^{\\alpha})$), for $\\alpha=1/14$, and this lets them obtain the result for Majority-of-$g(\\alpha)$, where $g(\\alpha)=2/\\alpha+1$. The authors conjecture that the analysis of the generalization bound could be improved, and a Majority-of-3 might well suffice for optimal error.\n\nFinally, the authors also do a (somewhat-limited) empirical comparison of the the performances of the three optimal weak-to-strong learners mentioned above (LarsenRitzert, Bagging-based-boosting, Majority-of-29) as well as AdaBoost. The authors find that for large datasets, Majority-of-29 outperforms the other optimal weak-to-strong learners. On the smaller datasets, the authors find that Bagging-based-boosting outperforms Majority-of-29.\n\nThe weak-to-strong learner that the authors propose is optimal, and also requires the fewest calls to the weak learner among all optimal weak-to-strong learners that we know. More importantly, it is exceedingly simple and elegant. It also empirically outperforms the other optimal weak-to-strong learners (at least in the experiments performed by the authors). It is also nice to see that the analysis technique from Aden-Ali et al. (2023) finds new applications. The paper is well-written, sets up the stage (along with relevant prior work) well in the first two sections, and provides a nice high-level summary of the formal analysis in Section 3.\n\nWhile the theoretical contribution is substantial and undeniable, arguably, the experimental section is extremely limited (which is okay, and the authors admit this at the end, but this is still a limitation, especially if we want to draw conclusions about the empirical performance of the different weak-to-strong learners). The authors only perform experiments on 4 real-world datasets---there are admittedly many more out there, even just in the UCI repository. Could the authors at least elaborate on their rationale behind choosing the datasets that they did? (e.g., was it a random subset of 4? was it the first 4? was it the best 4 from 20 that they observed this trend on?) How might one believe that there is no cherry-picking of datasets involved? The authors make two conclusions from their experiments: 1) on larger datasets, Majority-of-29 outperforms both Bagging-based-boosting and LarsenRitzert. 2) on smaller datasets, Bagging-based-boosting outperforms Majority-of-29. Importantly, the former conclusion is drawn from results on just 3 datasets, and the latter is drawn from just 1! This can really make one skeptical about whether they should truly believe these conclusions. It is okay that this is just a pilot empirical study, but such claims call for significantly larger empirical validation. Also, please see the questions below.\n\n1) Do we have reason to believe that $O(\\ln(m)/\\gamma^2)$ calls to the weak learner is indeed the best gold standard we can hope for? To my understanding, the reason we need $O(\\ln(m)/\\gamma^2)$ calls to the weak learner in AdaBoost is because we want to use a margin-based generalization bound that expects a classifier to have at least $\\Omega(\\gamma)$ margin on every training sample---AdaBoost attains this guarantee only after $O(\\ln(m)/\\gamma^2)$ iterations. But could it perhaps be possible that there is a weak-to-strong learner out there that attains optimal error of $O(d/\\gamma^2m)$ with $o(\\ln(m)/\\gamma^2)$ calls to the weak learner?\n\n2) In the Experiments section, the x-axis in Figures 1 and 2 varies the number X of AdaBoosts trained on disjoint partitions in the Majority-of-X algorithm. But this is not a parameter in the other algorithms (BaggedAdaboost and LarsenRitzert). Hence, I would have expected to see a constant line for these other algorithms in the plots (like how the red and blue lines are constant in Figure 2). Why are there different numbers corresponding to different number of voting classifiers in BaggedAdaboost and LarsenRitzert in Figure 1 (and also for BaggedAdaboost in Figure 2)? Am I missing something?\n\nMinor/Typos: \\\nLine 133: It is 0 if half of hypotheses are correct and half are wrong --- this is only true **in a weighted sense** right? \\\nLine 299: this suggests*" }, { "confidence": 4, "rating": 3, "review_id": "vSepSGuYe8", "review_text": "This paper introduces a new Boosting algorithm, MAJORITY-OF-29, which achieves provably optimal sample complexity and is remarkably simple to implement. The algorithm partitions the training data into 29 disjoint subsets, applies AdaBoost to each subset, and combines the resulting classifiers through a majority vote. This approach not only matches the asymptotic performance of AdaBoost but also improves upon previous weak-to-strong learners in terms of simplicity and runtime efficiency.\n\n1. The paper introduces a novel method and provides detailed theoretical analysis.\n\n2. Existing experiments fail to demonstrate the effectiveness of the proposed method, and there is a lack of analysis and discussion on current experimental results.\n\nPlease see the weakness." }, { "confidence": 5, "rating": 7, "review_id": "3tvtCvfWPT", "review_text": "The authors present a new boosting algorithm: partition training data into 29 pieces of equal size, run AdaBoost on each, and output the majority vote over them. The authors prove that the sample complexity of MajorityVote29 is optimal and its running time is the same order as AdaBoost. Experimental results are also attached, which corroborate their theoretical findings.\n\n- Very strong and interesting result\n- Mathematically sound, based by my judgement\n- Good presentation, self-contained and well-structured\n\nN/A\n\nN/A" } ]
z6reLFqv6w
Learning diverse causally emergent representations from time series data
Cognitive processes usually take place at a macroscopic scale in systems characterised by emergent properties, which make the whole ‘more than the sum of its parts.’ While recent proposals have provided quantitative, information-theoretic metrics to detect emergence in time series data, it is often highly non-trivial to identify the relevant macroscopic variables a priori. In this paper we leverage recent advances in representation learning and differentiable information estimators to put forward a data-driven method to find emergent variables. The proposed method successfully detects emergent variables and recovers the ground-truth emergence values in a synthetic dataset. Furthermore, we show the method can be extended to learn multiple independent features, extracting a diverse set of emergent quantities. We finally show that a modified method scales to real experimental data from primate brain activity, paving the ground for future analyses uncovering the emergent structure of cognitive representations in biological and artificial intelligence systems.
https://openreview.net/pdf/8fbf255282581472847dd90c5114aca7f4d35e2d.pdf
[ { "confidence": 3, "rating": 5, "review_id": "ILd0j0N52p", "review_text": "The article proposes a learning scheme aimed at detecting emergent quantities from time series data of systems made of many interacting parts, such as the brain. To this end the authors combine \"minimum mutual information\", a previously introduced emergence criterion, with SMILE, a differentiable lower bound estimator for mutual information. Differentiability is crucial for the loss function to be optimizable efficiently. They apply this architecture to two examples: First, a series of random bit strings with time-correlated parity, where parity is considered the emergent quantity. Second, real-world data of macaque brain activity. The approach successfully identifies parity in the first example. The authors claim that an emergent feature has been learned for the second example also.\n\nWhile the individual parts of the learning scheme are not new, their combination into a differentiable architecture is original and seems like a promising direction to me. The analysis seems sound, even though I found the presentation at times a bit hard to follow as some parts seem to be missing. The individual quantities are mostly clearly defined and the individual results are statistically significant in terms of error bars.\n\nFrom the article alone, I could not fully understand the architecture and its training procedure that is illustrated in Fig. 1. I could not find code that would reproduce it, or a detailed pseudo-code description of the algorithm. It is unclear to me what emergent feature was found for the monkey example, or how Fig. 4 proves that any such feature was found. While the architecture and direction seems promising to me, a few more benchmarks would help make the case that this scheme can find emergent features in many settings. The two examples shown are one toy example with unnatural time dynamics and one real world example where it is hard to understand the dynamics from first principles. Benchmarking this new method on more standard examples with emergent behavior, such as Ising models, would be more convincing.\n\n1) How does Figure 4 prove that an emergent feature was found? What is the quantity on the x-axis in this figure?\n2) Can you share the code of the architecture and its training or describe it in more detail?\n3) Would you expect this method to find emergent features in standard, well-understood systems, such as Ising models?\n4) Can you explicitly write down the prediction-only objective in Fig.2?\n\nOverall, I find the approach promising but not yet tested on enough examples to support its ability to find emergent features." }, { "confidence": 4, "rating": 6, "review_id": "PIkakmKRkD", "review_text": "This paper introduces a method for learning the causally emergent representation of time series data. Based on the Partial Information Decomposition (PID) and ΦID definition of emergent variables, the paper utilizes variational information lower bounds to estimate and optimize the emergence objective function. The paper further includes a Minimum Mutual Information term and a penalty term, to reduce redundancy and discover diverse emergent variables, respectively. Experiments on a synthetic dataset and a primate brain activity dataset show that the method is able to discover diverse causally emergent representation.\n\nDiscovering causally emergent representations is a very interesting topic, and has significance in a wide range of scientific disciplines. The paper is inspirational and written clearly. Although the components of the method, i.e. definition of emergence objective function, and variational bounds for mutual information, are not new, their combination to discover causally emergent representations in a learnable way is interesting, and to my knowledge, novel.\n\nAs discussed above, the novelty is a little limited. This can be compensated by solid evaluations with a wide range of interesting datasets. I think the place the paper needs most improvements is more diverse and extensive evaluations. The paper can benefit from a few more datasets, both synthetic and real world, including the other datasets used in [1] and other references. If there exists baselines for discovering causally emergent representations, those baselines should also be compared against.\n\n\n\nReference:\n\n1. Rosas, Fernando E., et al. \"Reconciling emergences: An information-theoretic approach to identify causal emergence in multivariate data.\" PLoS computational biology 16.12 (2020): e1008289.\n\nN/A" }, { "confidence": 3, "rating": 7, "review_id": "aC6COYJ4FS", "review_text": "The paper presents a method for identifying emergent variables in time series data through a novel machine learning architecture. It uses unsupervised learning for representation and information theory to find emergent properties in systems, which are often complex and not easily describable at the microscale level. The paper is motivated by the fact that unsupervised learning can be a powerful tool for identifying emergent properties, but current approaches limit to only information theory.\n\nThe method rests on maximizing an objective defined by subtracting the mutual information of state variables at time t and the coarse graining at time t + 1 from the mutual information of the coarse grainings at t and t + 1. In other words, the amount of emergent information. Information theoretic definition of emergence is thus used to facilitate unsupervised learning. The method is tested on synthetic data and real-world ECoG brain activity data, demonstrating its ability to identify known and novel emergent features effectively.\n\nExperiments are conducted on a synthetic dataset and a macaque brain activity dataset. For the synthetic dataset, the method is able to estimate the ground-truth value of psi (the difference that is central to the objective function). For ECoG data, skip connections were introduced into the architecture, and once again found emergent representations. \n\nThe paper concludes with a discussion on related (info theory) work, limitations, and future steps.\n\n### Clarity \n- Diagram features are well designed and results features are clear and salient\n- Though writing is somewhat unstructured, the shorter-range explanations are well-done\n- Methodology is given in detail. Lots of helpful explanation of relevant information theory, as well as the overall approach\n\n### Quality\n- Good to have primate brain data, though more interpretation would help \n- Covers all the basic needs for a new method: real data, novel setup, suitable metrics (though they need more explanation)\n- Experimental setup is well-designed to demonstrate that emergent variables are being learned \n\n### Originality\n- As far as I know, applying the information theoretic definition of emergent variables as an objective and training in this setting is novel\n\n### Significance\n- An innovative idea that shows promise. While there could be more experimentation, this is a promising and new direction.\n\n### Clarity\n- It's not immediately clear how to interpret results. The paper shows figures, but it doesn't explain them much. Interpreting them requires a lot of re-reading the methods section\n- Writing is somewhat verbose and unstructured, and occasionally reads like a process statement\n\n### Quality \n- This idea is compelling and innovative! The loss built on MI of coarse grainings and state variables is intuitive while creating a solid foundation for taking advantage of the capabilities of unsupervised learning\n- On the ECoG dataset, giving intuition/semantic understanding of emergent features (or at least attempting interpretation) would be cool\n- Limited experiments on real data in general - ultimately, only one experimental setting is shown as far as I understand. The synthetic problem, while useful, is simple\n- Lacking baselines or extensive comparison to existing methods, even if purely information-theoretic\n\n### Significance\n- It would help to have clearer comparison to existing methods so that we could see the value-add of this innovation, not just the novelty and value alone\n\nNone" }, { "confidence": 4, "rating": 7, "review_id": "m16sJVMc1u", "review_text": "The paper introduces a novel objective function and deep learning architecture that are targeted to extract emergent latent representations from time series data. Motivation is very clear. The definition of emergent latent representation interesting and useful. The utilization of mutual information estimators (lower bounds thereof) is smart. Evaluations are restricted to a fully artificial and a fully neurobiological dataset.\n\nThe study of emergence and its conceptual and mathematical formalization is of general interest to neural information processing and the involved (part-of) cognitive science subdiscipline within. The authors utilize an existing definition thereof [30] as well as an approximation technique of a lower bound on mutual information (SMILE, [32]), which they combine in a highly elegant manner to yield their learning architecture. \n\nThe usage of a linear temporal predictor with learnable residuals is a great way to bootstrap the system’s abilities. \n\nEven multiple emergent latents can be successfully extracted. \n\nA real-world dataset indicates applicability beyond artificial data. \n\nPaper is very well written – relatively easy to comprehend and all steps taken are very well motivated and sufficient background is provided.\n\nSystem evaluations are minimalistic and not as convincing as I had hoped. Both, comparisons to potential baseline algorithms as well as more ablations are missing. \n\nFurthermore, one artificial dataset and one not well-motivated real-world neural dataset seems not enough to warrant publication. \n\nIn particular, I would have expected at least one if not multiple DL/Neural Network baselines that do not pursue the information-theoretic objective but simply a standard temporal prediction objective. Those probably do not work on the parity problem at all but at least an attempt seems needed. That is, use a DREAMER-like world model learning architecture with probabilistic latents and see if structure emerges. \n\nAblations could have explored more than just the same architecture without the penalty / adjustment term or without the macroscopic estimator. Further, in the artificial dataset the correlation coefficients $\\gamma$ are quite high – was this necessary? When does this break down? Evaluations with a non-linear prediction pipeline would also be useful.\n\nEmergence comes in many forms – I wonder if you could discuss alternative definitions / alternative perspectives on the matter?\n\nLine 52 – a “not” too much. \n\nEq (2): can you motivated the adjustment term further the summand (n-1)min_i… ? Are there alternatives to this that would be more targeted towards actually identifying true redundant mutual information? \n\nEq (4). Could you motivate the clip operator slightly more? \n\nLine 102 should read “accurately”\n\nParagraph 113ff: maximize / minimize information – I am not sure if this is worded the right-way round – could you double check and slightly reword? \n\nLine 137 – should not read “also”\n\nLine 142 – one “is” too much\n\nLine 190ff. removing the macroscopic MI term seems not to save much – or does it? The observation is interesting, but I wonder if the authors want to make a computational argument here as well. \n\nAbout the biological dataset – this is very ad-hoc somewhat. What does this analysis tell us really except that there are some complex spatio-temporal statistical correlations in the data? I find this one marginally useful." } ]
z6KNvOe9zQ
Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning
Recently, vision model pre-training has evolved from relying on manually annotated datasets to leveraging large-scale, web-crawled image-text data. Despite these advances, there is no pre-training method that effectively exploits the interleaved image-text data, which is very prevalent on the Internet. Inspired by the recent success of compression learning in natural language processing, we propose a novel vision model pre-training method called Latent Compression Learning (LCL) for interleaved image-text data. This method performs latent compression learning by maximizing the mutual information between the inputs and outputs of a causal attention model. The training objective can be decomposed into two basic tasks: 1) contrastive learning between visual representation and preceding context, and 2) generating subsequent text based on visual representation. Our experiments demonstrate that our method not only matches the performance of CLIP on paired pre-training datasets (e.g., LAION), but can also leverage interleaved pre-training data (e.g., MMC4) to learn robust visual representations from scratch, showcasing the potential of vision model pre-training with interleaved image-text data.
https://openreview.net/pdf/d285c73049fe1865644d41380d358b90dcf98a21.pdf
[ { "confidence": 4, "rating": 6, "review_id": "RzoB0WhCLO", "review_text": "This paper introduces a vision backbone pre-training method named Latent Compression Learning (LCL) to utilize interleaved image-text data. The proposed LCL approach maximizes mutual information between the inputs and outputs of a GPT-like model in autoregressive manner. The proposed method integrate both discriminative and generative objectives by contrasting preceding context and generate subsequent text based on visual representation. The extensive experiments demonstrate that LCL not only matches the performance of existing models like CLIP on paired datasets (e.g., LAION) but also effectively leverages interleaved pre-training data (e.g., MMC4) to learn robust visual representations from scratch.\n\n1. The paper is well written and easy to follow.\n\n2. The paper introduces a new pre-training method, Latent Compression Learning (LCL), which utilizes interleaved image-text data for visual backbone pre-training for the first time. And this can effectively leveraging large-scale web-crawled data, which is easier to crawl compared to the image-text pairs.\n\n3. Extensive experiments are conducted, demonstrating the effectiveness of the proposed method on both paired datasets (e.g., LAION) and interleaved datasets (e.g., MMC4).\n\n1. From Table 5, it appears that solely leveraging image-text pairs with LCL does not provide benefits over the CLIP baseline. However, when using the MMC4 dataset, which is manually composed of interleaved text, there is significant performance improvement on downstream tasks. I am curious whether this performance gain results from the increased number of training samples (i.e., the total number of images used during training).\n\n2. According to Table 3, utilizing original interleaved datasets such as Obelics does not yield any performance gain. In comparison, the MMC4 dataset requires more computation for data filtering with the CLIP score and the use of image-text pairs to create interleaved data. It is unclear how to efficiently utilize the original interleaved data directly crawled from the web. Do you have any insights on the differences between these two types of interleaved datasets?\n\n1. The scaling behavior is not demonstrated. While the author has shown the effectiveness of training on MMC4 and Laion-400M, it remains unclear how the model performs and correlates across different dataset scales. Understanding this could provide valuable insights into the feasibility and performance of scaling the proposed method to larger datasets, such as DataComp 12.8B and Laion 2B.\n\n2. Can you show the seen samples of each model? It would be helpful for readers understanding the scale of model's training." }, { "confidence": 4, "rating": 4, "review_id": "eAijpzsyCu", "review_text": "The paper tackles the problem of vision model pre-training. More exactly, it aims to exploit the interleaved image-text data that is very prevalent on the Internet. It proposes Latent Compression Learning that maximises the mutual information between the inputs and outputs of a causal attention model. When visual pre-training is applied to interleaved image-text data, visual latents are extracted using a visual encoding network and then combined with the text and fed to the causal model.\n\nThe paper tackles an important task and proposes an interesting method that may be of interest to the research community.\n\nWhile the method seems interesting, my main concern is related to the experimental part that I find confusing. For example for BEiT3 the numbers reported are different from the ones reported in the paper.\n\nAlso, I think that for Tab 6, more multi-modal LLMs need to be included. While there can be a debate on fair vs unfair comparison, I think that you present results on a dataset these need to be complete. So, they can be greyed out, put in a different section, etc and explained why the comparison is not fair, but I don't think it's suitable for models that perform better to not be included at all. So, missing comparisons:\n\nFang, Yuxin, et al. \"Eva: Exploring the limits of masked visual representation learning at scale.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\nZou, Xueyan, et al. \"Generalized decoding for pixel, image, and language.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\n\nor even some very recent ones for the sake of completeness:\nSun, Quan, et al. \"Generative multimodal models are in-context learners.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\nLiu, Haotian, et al. \"Improved baselines with visual instruction tuning.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\nWhy is OpenAI CLIP greyed out?\n\nWhy are the numbers reported for BEiT3 different than what they report in the paper? For example for Flicker 3K, R@1, it's reported 73.2 while the paper reports 88.2\n\nWhy, for example the comparison with BEIT3 is only shown in Tab.1 when they report results on VQAv2? The same question for CoCA?" }, { "confidence": 4, "rating": 4, "review_id": "Okk5ri69uJ", "review_text": "The paper pre-trains models with a combination of a contrastive image-text objective and a generative language objective. The authors provide many results on image classification and vision-language tasks suggesting the competitiveness of the method in controlled settings.\n\nS1. The paper is well framed and motivates nicely the need to pre-training on interleaved data.\n\nS2. The paper gives good intuition about what the various equations mean, making the manuscript more accessible.\n\nS3. Consideration of many pre-training datasets including LAION-400M, MMC4, and OBELICS.\n\nS4. Extensive objective ablations spanning both contrastive and generative losses.\n\nW1. [MAJOR] The paper presents the objective as novel (L44-54); however, it seems similar to CoCa (Yu et al., 2022.), which also employs a contrastive loss and a next token prediction loss. Can the authors clarify the differences and why the formulation is novel?\n\nW2. It seems equation 3 appears in prior work; however, when it is first presented it seems to be presented as a novel insight. I recommend making the attribution to prior work more clear before introducing the equation.\n\nW3. In the relation to previous pre-training tasks, it is important to also relate to CoCa. It seems the objective is pretty much the same suggesting that the objective is not actually a contribution of the work. Is there any reason CoCa is not mentioned here given the similarities?\n\nW4. Make sure it is clear that you train on sequences with more than one image per sample (I am assuming this is true because you train on MMC4, but when explaining the objectives you include only one sequence for simplicity). 3.3 is a nice place to add this information. Also any special tricks to get multi-image to work? If so, it could also be nice to mention this.\n\nW5. Why are the numbers for Flamingo in Tab 1 for IN-1k so low? Flamingo uses a pre-trained vision backbone, so I expect numbers to be good here.\n\nW6. Is the COCO CIDEr evaluation protocol zero-shot? If so the number in table 4(a) of 87.5 looks extremely high relative to open flamingo and Idefics. Please double check this number and if few-shot prompting is used here, please make this clear. Also why is Gen. only worse than Con. only for captioning. How is contrastive learner able to do captioning?\n\nW7. In the frozen transfer setting in Tab. 6 are all models fine-tuned on the same data? If so, what data? The specifics of the experiment are not clear to me, making it hard to interpret the results.\n\nPlease see the weaknesses section for specific questions and topics to address." }, { "confidence": 4, "rating": 5, "review_id": "20U40JIM6z", "review_text": "This paper aims to explore the use of weak supervision signals in multimodal interleaved image-text data to pretrain visual encoder, compressing the distribution of high-level features into the visual encoder. The paper employs contrastive loss and autoregressive loss to train the model. To prevent the collapse of visual representations, an entropy maximization constraint is applied. The paper derives the equivalence of maximizing the mutual information between the model's input and output as a latent compression and entropy constraint. The proposed pre-training method, called LCL, achieves performance comparable to CLIP on paired data while better utilizing the supervision information in interleaved data.\n\nThis paper explores how to use weak supervision signals in more general interleaved image-text data to accomplish visual pre-training. Its advantages are as follows:\n1. Unlike previous approaches that fine-tune pre-trained visual models to align visual representations with the text space (Flamingo, LLaVA), this paper explores how to train visual models from scratch using interleaved image-text data. This is a meaningful exploration.\n2. To prevent the collapse of visual representations, where autoregressive generation relies solely on textual information, this paper imposes an entropy constraint and further derives it as optimizing mutual information. This approach aids in model training.\n3. Extensive quantitative experiments have validated the effectiveness of the visual models trained using this approach.\n\nThis paper has the following areas for improvement:\n1. In some cases, the textual context may have little relevance to the image. It is worth investigating whether such data could harm the model's performance.\n2. The paper lacks qualitative experiments to further demonstrate the effectiveness of the method. Designing reasonable visualization analyses would help to further elucidate the advantages of the approach.\n3. Similar to CLIP, further demonstrating the model's transfer learning performance through domain adaptation tests and few-shot metrics would be beneficial.\n\nsee weaknesses." } ]
z4eVwH484M
Unveiling the Hidden: Online Vectorized HD Map Construction with Clip-Level Token Interaction and Propagation
Predicting and constructing road geometric information (e.g., lane lines, road markers) is a crucial task for safe autonomous driving, while such static map elements can be repeatedly occluded by various dynamic objects on the road. Recent studies have shown significantly improved vectorized high-definition (HD) map construction performance, but there has been insufficient investigation of temporal information across adjacent input frames (i.e., clips), which may lead to inconsistent and suboptimal prediction results. To tackle this, we introduce a novel paradigm of clip-level vectorized HD map construction, MapUnveiler, which explicitly unveils the occluded map elements within a clip input by relating dense image representations with efficient clip tokens. Additionally, MapUnveiler associates inter-clip information through clip token propagation, effectively utilizing long- term temporal map information. MapUnveiler runs efficiently with the proposed clip-level pipeline by avoiding redundant computation with temporal stride while building a global map relationship. Our extensive experiments demonstrate that MapUnveiler achieves state-of-the-art performance on both the nuScenes and Argoverse2 benchmark datasets. We also showcase that MapUnveiler significantly outperforms state-of-the-art approaches in a challenging setting, achieving +10.7% mAP improvement in heavily occluded driving road scenes. The project page can be found at https://mapunveiler.github.io.
https://openreview.net/pdf/3a7293ee08a835f73148a1a4ebcffbb2a8b5e0a6.pdf
[ { "confidence": 5, "rating": 5, "review_id": "thlV3lCaCJ", "review_text": "This paper aims to improve vectorized HD map construction for autonomous driving. Inspired by the global feature association in traditional offline HD mapping, the proposed MapUnveiler processes input frames in a clip-based manner and hopes to resolve occlusions using information from previous frames. Built up MapTRv2, MapUnveiler introduces clip tokens together with the Inter-clip and Intra-clip Unveiler modules to update the map queries with temporal information. Experiments on nuScenes and Argoverse2 datasets demonstrate the superior performance of the proposed method, especially on highly-occluded scenes.\n\n1. The idea of incorporating and aggregating clip-level information for online vectorized HD mapping is reasonable and is more akin to how humans drive. The proposed method has more thoughtful designs than early works such as StreamMapNet to better handle occlusions and incorporate long-range information. \n\n2. The proposed MapUnveiler obtains state-of-the-art results in various experimental settings. The improvements over previous methods are especially prominent in the large 100mx50m setting and the highly-occluded scenes collected by the authors. \n\n3. Extensive ablation studies enumerate the choices of almost all hyper-parameters or model components, which helps better understand and break down each element's contributions.\n\n1. The clarity of the method description is poor, making it very hard to thoroughly understand the proposed architecture. Details are discussed below:\n \n - The method explanation is not self-contained: i) The Inter-clip Unveiler section refers to the TTM and directly skips all details. There is no information at all about how is the compact memory token generated from the denser map queries; ii) The \"loss\" section refers to MapTRv2 and again skips all details. The authors should not assume the general audience to be aware of the concrete details of TTM and MapTRv2. The core formulation of these components should be elaborated with texts or equations, while full details can go to the appendix.\n - The definitions of the temporal window T and the stride S are unclear. Based on the text descriptions and the common definition of stride, my understanding of \"T=3 and S =2\" is that \"each clip has 3 frames, and every two consecutive frames have a temporal gap of 1.\" However, the symbols in L177-178 seem to suggest other meanings of T and S.\n - The description of the inference mechanism is also vague. Is the MapUnveiler executed per frame or per clip? Figure 2 seems to suggest the per-clip inference where the predictions of T frames are obtained together. If this is the case, does it hurt the actual response frequency? \n \n In short, Section 3 of the paper lacks significant details, and I cannot properly understand MapUnveiler's exact formulation. Given that the authors answer \"No\" to Question 5 of the Checklist, I have to raise concerns about the paper's reproducibility.\n\n2. There is no detail on how the pre-training and fine-tuning are conducted. Do you initialize the MapNet by training MapTRv2? If this is the case, how are the training epochs split for the MapNet pre-training and the end-to-end MapUnveiler fine-tuning? If the 24/6 epochs for nuScenes/Argo2 are only for the fine-tuning stage, then the comparisons in the main table are unfair, as other methods in the table have not fully converged. \n\n3. The main comparison results are incomplete. Most previous papers provide the nuScenes results of both short and long training schedules, but the main table only presents short-schedule results. Considering the last question about the pre-training and fine-tuning, the authors should complement the table with long-schedule results to show that MapUnveiler can obtain consistent performance boosts when all the methods are fully converged. This concern is backed up by the fact that MapUnveiler's improvement is much smaller on Argo2 compared to nuScenes -- based on my empirical experience, previous methods like MapTRv2 and its followups converge faster on Argo2, and training for 6 epochs is close to convergence. This probably suggests that the large performance gaps on nuScenes come from unfair training settings. \n\n4. Your interpretation of StreamMapNet and SQD-MapNet's Argo2 training epochs is wrong. These two methods employ a different frame sampling strategy at training time compared to MapTRv2, but their effective number of training samples is the same as MapTRv2. Therefore, the claim about the \"longer training schedules\" in the main table's caption is misleading.\n\n5. The venues in the main table are not accurate. HIMap[49] and MGMap[24] are accepted by CVPR2024, and the information was already available at the time of NeurIPS submission. Furthermore, a recent HD map construction method, MapTracker[A], also studies temporal modeling and should be very relevant, but it is missing in the discussion and related works. \n\n [A] MapTracker: Tracking with Strided Memory Fusion for Consistent Vector HD Mapping, arXiv:2403.15951\n\nThe paper studies an important problem (temporal information association) in online HD map construction and proposes a reasonable method. However, the poor clarity and the potentially incomplete/unfair comparison results raise serious concerns about the paper's quality and reproducibility. My current rating is reject, and I will consider changing the score if the main weaknesses are properly addressed." }, { "confidence": 3, "rating": 7, "review_id": "yGJBNRHryO", "review_text": "The authors propose a new approach for constructing vectorized high-definition maps that exploits temporal information across adjacent input frames. The model, which they call MapUnveiler, operates at the clip-level and consists of an intra-clip unveiler which generates vectorized maps for T frames and an inter-clip unveiler which uses a memory module to aggregate information between clips. The authors present results on two standard benchmarks, vectorized HD map construction benchmarks (nuScenes and Argoverse2) and demonstrate the model’s superior quantitive performance to several previously proposed approaches. They also show several qualitative examples of how MapUnveiler can better handle occlusions in the input images.\n\n- The paper is well-written and contextualized well within prior work.\n- The methodology is novel and well-motivated.\n- The results are strong on the two tested datasets, both quantitatively and qualitatively.\n- Many different analyses and ablations were included to justify the design decisions used within MapUnveiler and show its strengths.\n\n1. The methods is dense and a bit hard to read. The architecture figures help but are also a bit difficult to parse through. It would be helpful to try to weave more intuition into the text.\n2. Claiming \"-9.8%\" is significant but \"-6.0%\" is comparable in the robustness to occlusion section seems a bit arbitrary (and potentially overstating MapUnveiler's performance, as a 6% drop is still considerable). I suggest the authors rephrase this sentence (and address similar claims in the paper).\n\nThere are several typos throughout the paper. I have enumerated some here, but encourage the authors to do a detailed proofread:\n- 127: With there\n- 129: mapnet -> MapNet\n- 161: bev -> BEV\n- 167 parenthesis \n- 192 backwards parenthesis \n- 294: In addition, if we choose too short\n\n1. Have the authors tried quantized models to reduce GPU memory? It could be interesting to see if the gains from larger window sizes outweigh the losses from quantization.\n2. The model still seems to struggle with some occlusions (a 6% drop from the standard split). Why do the authors think that is? Are these just very difficult cases or issues with the model?\n3. The one limitation that was discussed seems like it can be tested. How does randomly dropped intermediate frames affect model performance?" }, { "confidence": 5, "rating": 6, "review_id": "hLDba7LQ3O", "review_text": "This work presents a method called MapUnveiler, which aims to improve the construction of vectorized HD maps for autonomous driving. MapUnveiler uses a novel clip-level pipeline to unveil occluded map elements by relating dense image representations with efficient clip tokens and propagating inter-clip information. This approach leverages temporal information across adjacent input frames, addressing the limitations of single-frame and streaming inference methods. The model achieves state-of-the-art performance on the nuScenes and Argoverse2 benchmark datasets, demonstrating promising improvements in challenging scenarios with longer perception ranges and heavy occlusions.\n\n1. The introduction of a clip-level pipeline for vectorized HD map construction effectively addresses occlusion issues and leverages temporal information across multiple frames.\n2. The method utilizes clip tokens to propagate map information efficiently, reducing redundant computations and enhancing prediction consistency.\n3. Extensive experiments demonstrate that MapUnveiler achieves state-of-the-art performance on nuScenes and Argoverse2 benchmarks, particularly in challenging scenarios.\n\n1. The community has noticed a severe data leakage issue with utilizing nuScenes and Argoverse2 datasets for online mapping evaluation {1, 2}, as these datasets are not intentionally built for online mapping. It might also be necessary to validate the proposed method on geo-disjoint training and validation sets.\n2. It would be good to see the analysis of added model compacity due to the introduction of the proposed intra-clip unveiler and inter-clip unveiler.\n3. It seems the proposed intra-clip unveiler and inter-clip unveiler are adaptable to any single-frame inference online mapping methods. It would be good to validate the effectiveness of the proposed modules on other baseline methods.\n4. The authors are encouraged to investigate the consistency of estimated HD maps across frames of the proposed method compared to existing methods with \"inconsistent and suboptimal prediction results\" (mentioned in Line 7).\n{1} Augmenting Lane Perception and Topology Understanding with Standard Definition Navigation Maps.\n{2} Localization Is All You Evaluate: Data Leakage in Online Mapping Datasets and How to Fix It.\n\n1. What do the map queries stand for? Can they be transferred directly to vectorized HD maps?\n2. Is the map decoder adopted from MapTRv2?\n3. Are map tokens generated from the intra-clip unveiler the refined version of map queries?" }, { "confidence": 4, "rating": 5, "review_id": "nyiAWoHIB9", "review_text": "This paper proposes a clip-based vectorized HD map construction paradigm for the processing of long temporal sequence, in which occluded map elements are unveiled explicitly by efficient clip tokens. Through clip token propagation, MapUnveiler achieves effective utilization of long-term temporal map information by associating inter-clip information, in which clip tokens are propagated rather than dense BEV features. Experiments demonstrate that MapUnveiler boosts the performance on public benchmark datasets, also for more challenging setting like long-range perception and heavily occluded driving scenes.\n\n1. This paper is well-written and easy-to-follow. Figures clearly conveys the intended message.\n2. “Unveiling the hidden” and clip token propagation are reasonable and effective strategy for static Map element detection, which is practical and alleviates the problem to some extent.\n3. The proposed method demonstrates strong performance on benchmark dataset, comprehensive experiments and ablation studies justify the model design.\n\n1. As mentioned at line 227, this work is built on pretrained frame-level MapTRV2 and fine-tuned, thus the comparison can be unfair. Results without pretraining are required to verify your effectiveness.\n2. At line 53 and BEV Updater in line 151, for occluded features, how to select the tokens that are visible in certain frames? Seems tokens within the temporal window are fully utilized for BEV update by cross attention, how to determine whether these tokens contain unblocked information? More explanations are required.\n\nWhat is the experiment result for geometric-based dataset split as mentioned in [1] and [2]? Besides, what is the additional computing costs considering the injection of temporal clip token?\n[1] Yuan, Tianyuan, et al. \"Streammapnet: Streaming mapping network for vectorized online hd map construction.\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024.\n[2] Lilja, Adam, et al. \"Localization is all you evaluate: Data leakage in online mapping datasets and how to fix it.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024." } ]
z4duW3KzlD
Gated Inference Network: Inference and Learning State-Space Models
This paper advances temporal reasoning within dynamically changing high-dimensional noisy observations, focusing on a latent space that characterizes the nonlinear dynamics of objects in their environment. We introduce the *Gated Inference Network* (GIN), an efficient approximate Bayesian inference algorithm for state space models (SSMs) with nonlinear state transitions and emissions. GIN disentangles two latent representations: one representing the object derived from a nonlinear mapping model, and another representing the latent state describing its dynamics. This disentanglement enables direct state estimation and missing data imputation as the world evolves. To infer the latent state, we utilize a deep extended Kalman filter (EKF) approach that integrates a novel compact RNN structure to compute both the Kalman Gain (KG) and smoothing gain (SG), completing the data flow. This design results in a computational cost per step that is linearly faster than EKF but introduces issues such as the exploding gradient problem. To mitigate the exploding gradients caused by the compact RNN structure in our model, we propose a specialized learning method that ensures stable training and inference. The model is then trained end-to-end on videos depicting a diverse range of simulated and real-world physical systems, and outperforms its ounterparts —RNNs, autoregressive models, and variational approaches— in state estimation and missing data imputation tasks.
https://openreview.net/pdf/b60e4a43045e2eeab67a3948852e749fe455f871.pdf
[ { "confidence": 4, "rating": 7, "review_id": "K9yByiu9ja", "review_text": "The paper presents a deep state-space model architecture with non-linear transitions and emissions. The model disentangles the latent representation for the dynamics and the one for the observed data at each time step - allowing therefore effective state estimation at future time steps and the ability to deal with missing data imputation.\nInference is performed with a deep Extended Kalman Filter, that relies on a RNN architecture to make a more efficient approximate computation of the Kalman Gain (KG) and smoothing gain (SG).\nThe method is tested on a number of simulated and realistic approaches, and it outperforms competing architectures.\n\n1. Non-linear/deep state-space models are being used more and more in many applications. Parameter learning and state estimation is however challenging in this setting, and this paper provides an interesting method for this\n2. The method is more scalable than comptering KF-based methods thanks to the dynamics network approximation, but still effective despite the aproximation\n3. The method builds on some models in the literature, but provides some useful novel components\n4. The authors did extensive and well-though experiments/ablations, comparing with many SOTA models\n5. There is an extensive appendix covering many details that did not fit in the main text. I particularly appreciated \"A.11.1 Python intuitive code.\"\n\nThe paper is not straightforward to read (had to read it carefully twice), mostly because of the way the required derivations are presented.\nThe notation used is somewhat not conventional within the ML-heavy NeurIPS community, and should be improved/clarified:\n1. In Section 4 you use o^+ notation which is not common in the ML community. Can you clarify what it means and why you need it? This explanation needs to be done in section 4, not referring to a different section.\n2. Similarly, what is s in line 219 and why do you need to introduce this notation? The sentence in line 219-221 is key but unclear\n3. Why do you need to define the \"gt\" in line 229?\n4. Not sure the SI perspective helps in the ML-heavy NeurIPS community, it brings confusion. Maybe can be added in the appendix?\n\nIn terms of novelty, the final model seems to me more similar to the Kalman VAE (KVAE) model than what the authors claim. Your model can almost be seen as a modification/extension to the KVAE in which you add the RNN approximation to avoid the O(n^3) complexity, model the transition noise covariance and use a slightly different parameterization for the dynamics network.\nCan you clarify the differences between your model and the KVAE? \nIn line 96 and Table 6 in the appendix, I don't think your KVAE description is correct: it has a setup very similar to your model which allows to estimate the state dynamics, and allows for direct optimization unlike what you claim.\nI am not as familiar with the other KF-based methods mentioned, but make sure your description is correct.\n\nMinor comments:\n1. Line 53: typo \"To to\"\n2. Line 71: typo \"We\" -> \"we\"\n3. You introduce gamma in line 144, but only say what it is in line 150, making the reader wonder if it was defined above and look for it\n4. Line 283: you say \"with n=3m\" without specifying what n and m are. Even if they are defined before, being a notation-heavy paper, better to remind the reader what n and m are.\n\nI am happy to increase the score as long as the comments/questions in the weaknesses section are clarified" }, { "confidence": 4, "rating": 8, "review_id": "dwidRzlUm1", "review_text": "The paper introduces a very well theoretically motivated State-Space Model learning approach, which is implemented by a gated inference network. The network implements a Hammerstein-Wiener model within a modularized deep learning architecture. It uses GRU cells to mimic Kalman Filtering operations. Forward as well as forward-inverse processing routines optimize the hidden state space estimations. Several theoretical components add to the paper contribution. Evaluations show superior performance on several challenging toy problems with noisy data (pendulum, double pendulum, ball bouncing in irregular polygon, as well as odometry prediction from kiti data) evaluating state estimation and imputation tasks.\n\nPaper is very well-structured. The work is also very well-motivated and well-embedded into the literature. \n\nThe theoretical motivation and system derivations are impressive and usefully embed the author’s GIN system into the Kalman Filtering background. Approximating everything in a variational inference manner via estimations of Gaussians and their Covariances is efficient.\n\nTheorems 3 and 4 offer a theoretical derivation for ensuring stability of the unfolding recurrence. \n\nThe evaluations contain sufficiently challenging problems. Performance is compared with many alternatives, showing superior performance nearly throughout. Only in Table 3 GIN was partially beaten by DeepVO.\n\nThe theorems 3 and 4 are not really experimentally evaluated. Is instability observed when the recurrent matrix is not modified as proposed? The theorem’s proposal should be verified experimentally. \n\nEven more elaborate evaluations would of course be great. Seeing the great content and the importance of the theoretical derivations, though, I consider this a very minor point, which can be tackled in subsequent work.\n\nFigure 1 trajectories are not quite as smooth as I had expected. Any reason for this? The task, particularly between bounces, to predict a linear trajectory should be very simple. \n\nI do not really see how Theorems 1 and 2 are actual theorem. Don’t they just define the log likelihoods of the ground truth states / observations? \n\nTheorem 2 – is the second summand starting with factor $(1-o_t^{(k)})$ necessary? \n\nLine 260: should read \"in theorem 3\", no?" }, { "confidence": 3, "rating": 6, "review_id": "GGvHDDuBZe", "review_text": "This paper advances temporal reasoning in dynamic, high-dimensional, noisy environments by introducing a novel architecture for latent variable state space models. The architecture permits efficient Bayesian inference with nonlinear transitions and emissions. Experiments are performed on toy datasets and a simple real-world dataset for state estimation and missing data imputation, showing that it beats benchmarks relative to competing models like RNNs, autoregressive models, and latent variable approaches.\n\nClear exposition of model architecture and inference algorithm. Theoretical analysis in Section 6.\n\nI think one thing that could really strengthen this paper is showing an experiment on a more challenging data set / problem. The first two experiments are on toy problems.\n\nI think another thing is to explain more clearly how this architecture is differentiated from others, i.e. the technical novelty. E.g. what is the relation of your model to other SSMs incorporating RNNs like the Variational RNN (which you benchmark against in the experiments), and what is it about that change that improves inference?\n\nAre there any experiments you could do that show a sequential modeling problem in which inference was previously intractable is now so?" } ]
z4FaPUslma
Guiding Neural Collapse: Optimising Towards the Nearest Simplex Equiangular Tight Frame
Neural Collapse (NC) is a recently observed phenomenon in neural networks that characterises the solution space of the final classifier layer when trained until zero training loss. Specifically, NC suggests that the final classifier layer converges to a Simplex Equiangular Tight Frame (ETF), which maximally separates the weights corresponding to each class. By duality, the penultimate layer feature means also converge to the same simplex ETF. Since this simple symmetric structure is optimal, our idea is to utilise this property to improve convergence speed. Specifically, we introduce the notion of \textit{nearest simplex ETF geometry} for the penultimate layer features at any given training iteration, by formulating it as a Riemannian optimisation. Then, at each iteration, the classifier weights are implicitly set to the nearest simplex ETF by solving this inner-optimisation, which is encapsulated within a declarative node to allow backpropagation. Our experiments on synthetic and real-world architectures on classification tasks demonstrate that our approach accelerates convergence and enhances training stability.
https://openreview.net/pdf/ac02c11fa162633bf19fadb27beddf13e3c58e97.pdf
[ { "confidence": 4, "rating": 6, "review_id": "ANjQlYoi5d", "review_text": "The paper uses Riemannian optimization to guide the final layer weights (the linear classifier) toward the nearest simplex ETF orientation. In particular, consider the two common approaches of training a deep classifier network:\n\n1. The standard training strategy where the final layer weights are updated by backpropagation.\n\n2. The final layer weights are fixed as a simplex ETF (which has been well-studied in previous works).\n\nThe proposed approach leverages the duality between penultimate layer features and the final layer weights (to form a simplex ETF orientation) and gradually guides the latter to an optimal simplex ETF per training step.\n\n1. The proposed approach frames the gradual transition of weights to a simplex ETF as a Riemannian optimization problem, which can be differentiated. Thus, allowing for an end-end training pipeline. The combination of these techniques is novel to the neural collapse setting.\n\n2. The experimental results are presented for the simple UFMs as well as practical networks and datasets to showcase the convergence benefits.\n\nThe authors do not provide numerical data for the extra memory and step-time that is required by the extra deep declarative layer. A brief discussion is presented in Section 5 but I believe further details would strengthen the paper. For instance:\n- By what percentage does the step time and memory increase when adding this layer?\n- When should one avoid the backward pass through this layer and consider only the forward pass?\n- What is the dependence on the memory and step time growth with the feature dimension and the number of classes? Maybe a simple UFM-based analysis should suffice.\n\nsee more questions below.\n\n1. How effective is the proposed approach in settings with imbalanced classes [1] ? More generally, for settings where the simplex ETF might not be an ideal configuration (for instance: graph neural networks, see [2] ). A brief discussion on these topics can further strengthen the paper.\n\n2. Instead of running the optimization layer to select the final layer weights at every step, what if we do it after every $k$ step? Can we potentially reduce the majority of the computational overheads while improving the convergence?\n\n3. What is the convergence behavior when employing SGD/Adam instead of the AGD approach?\n\nnit: Where is $U_{init}$ defined?\n\nnit: line 117, below eq (5), is the formulation of $\\widetilde{H}$ correct? shouldn't the denominator be $||\\overline{H}||_F$ ?\n\nReferences\n\n[1] Fang, C., He, H., Long, Q., & Su, W. J. (2021). Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training. Proceedings of the National Academy of Sciences\n\n[2] Kothapalli, Vignesh, Tom Tirer, and Joan Bruna. \"A neural collapse perspective on feature evolution in graph neural networks.\" Advances in Neural Information Processing Systems 36 (2024)." }, { "confidence": 3, "rating": 6, "review_id": "3AlJHSCT6i", "review_text": "This paper proposed a novel algorithm for neural network training. The algorithm is motivated by the recent discovery on the neural collapse phenomenon, which demonstrates that the last layer of neural network classifier will converge to a specific structure named simplex ETF. The authors propose to guide the network parameters to the ETF structure via explicitly penalizing on the distance to the ETF, and further address the non-uniqueness of the solution via adding a proximal term. Experimental results on various neural network architecture and real world datasets are presented, and the proposed algorithm can universally improve the training and testing accuracy over the standard training.\n\nThe proposed algorithm is novel and well motivated, and it shows universal and significant improvement over multiple choices of network architecture and datasets. The contribution of this work is solid, it helps the community to understand the benefit of the neural collapse phenomenon, and can potentially improve the standard paradigm of neural network training.\n\n1. The presentation should be improved, see questions for detail. In general the authors should give more detailed information about how the algorithm is implemented.\n\n2. Although the accuracy on train and test dataset exhibits significant improvement within the fixed number of training epochs, the proposed algorithm are much more complicated to compute. Therefore it makes more sense to compare the running time and computational cost with Standard and Fixed ETF.\n\n3. Proper ablation study is missing. The authors add many additional techniques, such as exponential moving average, stratified batch sampling, deep declarative layer to improve the training. It is not clear how much the improvement indeed comes from the nearest ETF optimization.\n\n1. How is equation 8 being optimized? Is it using Lagrangian multiplier method? A pseudo code of the proposed algorithm will be very helpful.\n\n2. I found the Proposition 1 hard to follow. The authors should explain what is the implication of this proposition and how does it helps to the stability of training. The current statement is confusing to distinguish the main result, and notations such as $D_y$ and $\\Lambda$ are not properly introduced. This proposition should be improved.\n\n3. In Table 1 and 2, fixed ETF on CIFAR10 with VGG seems to have much worse performance than others. Do you have insights about what is going on here?" }, { "confidence": 3, "rating": 6, "review_id": "6xNVWbkaAL", "review_text": "One of the key aspects of neural collapse (NC) is that the penultimate class feature means form a simplex Equiangular Tight Frame (ETF). The main idea of this paper is to leverage this insight and improve training by further encouraging this property during training. The authors suggest doing this by solving a Riemannian optimization at a given iteration. The way it works is that the classifier weights are set to the nearest simplex ETF obtained by solving this inner Riemannian optimization problem. The classifier weights are dynamically updated during training using this Riemannian opitmization problem at each iteration (rather than trained using gradient descent) using a \"deep declarative node\" this allows gradients to propagate through the Riemannian optimization. \n\nThey show that this approach indeed speeds up training convergence and improves training stability. Their experiments include both synthetic and real-world classification tasks and architectures. \n\nOverall the authors present a nice idea and it is a well-written paper. However, there are a few issues related to the experiments that I outline below. \n\nFrom my viewpoint, the value of this paper and their method (to me) is less the improved test accuracy and more the improved stability and speed of convergence. It's important to note that this speed up also comes at an additional cost (i.e. in performing the Riemannian optimization). Therefore, the improvements to stability or speed of convergences should be weighted against this caveat. I think it would help to highlight this tradeoff more upfront and make that more clear/transparent.\n\nThis is a thoughtful and well-written paper. The authors suggest a nice idea to leverage this insight of NC in deep learning and their approach has clear benefits. It is a nice idea and very well executed. \n\nThere are clear improvements to the current methods; e.g., their improvement upon [74] by solving the inner Riemannian optimization instead of requiring the model backbone to do the work of matching to a chosen fixed ETF.\n\nThe theory and the idea is very compelling. The implementation is good and well explained. Beyond the theory and the novelty of the idea, the main strength of the paper is the value added wrt convergence speed in terms of the number of epochs required for the network to converge.\n\nGood work.\n\nThe main points of concern for me are in regards to the experiments and how the results are reported in the paper.\n\nTable 2 looks good but is a bit misleading particularly when comparing the ranges of the test top-1 accuracy.\nThe results are still interesting but it's not such a strong/clear winner; that is, when looking at the ranges, it's not so obvious. The authors point this out and clarify that the advantages are speed to convergence and decreased variability which I agree are definite plusses.\n\nThe test top-1 accuracies reported in Table 2 aren't competitive with what can be obtained on these benchmark datasets, particularly for the Resnet models. For example, looking at 200 epochs or training, STL on ResNet50 should be able to achieve 85-90% test accuracy, even for Resnet18 the test top-1 accuracy for STL should be upwards of 75%. Similarly, for CIFAR100 on Resnet50, the test accuracies aren't competitive. It'd be interesting to see if these claims about variability still hold when giving the baselines adequate chance to be competitive.\n\nFor Figure 4, also no error bars. Understanding compute restraints, it would be nice to see similar multiple seed runs for ImageNet experiments. \n\nFinally, one thing that is not reported here is an estimate of compute cost. Their method requires additional compute for each iteration. Perhaps when compared on this axis their implicit ETF and the Standard training method would be more fairly compared. \nThe authors do mention this in the limitations section.\n\nHow do you know that the ETF that you steer towards via this Riemannian optimization process is better than the one that you would have arrived at naturally? You say \"this process [provides] the algorithm with a starting point that is closer to an optimal solution rather than requiring it to learn a simplex ETF or converge towards an arbitrary one\". How do you know that this is optimal? Optimal in what context? If I understand correctly, it's just the solution of the Riemannian optimization which means it forces the class means into an ETF. It's optimal wrt to the optimizaiton problem but not necessarily for the learning task? Is that correct?\n\nDo you do any, or is it possible to perform a comparison of these two resulting ETFs?\nHow does the test accuracy of your 'encouraged' ETF compare to the one you would have obtained naturally?\n\nIn Section 3.3. The Proximal Problem. I just don't see immediately why adding the proximal term guarantees uniqueness of the solution and how it stabilizes the Riemannian optimization problem. Can you add more detail or proof or reference to proof? \n\nOn first reading, it was unclear to me exactly how is U_prox defined? And what is used in practice. Is it determined from the previous iteration? If I understand correctly, you tried two approaches: setting U_init = U_prox = canonical ETF. Or to set both equal to random orthogonal matrices from classical compact groups.\nIt sounds like, in the end, you run training without the proximal matrix for one epoch. Then use the resulting U* to set U_init = U_prox = U* from that one epoch. Is that correct? How was this \"warmup\" approach validated? Did you experiment with various epochs? How stable were the outcomes of that analysis? You later mention (line 225) that the correct choice of these values is \"crucial\" so it seems important to understand.\n\nIn the Section Hyperparameter Selection and Riemannian Initialization Schemes: You mention that algorithm convergence is robust to values of \\delta and that the \\delta reg term is a trade-off between the optimal solution's proximity to the feature means and its proximity to the given simplex ETF direction. Did you explore how and when to introduce this constraint? Or any exploration of how the solution varies with \\delta?\n\nIn section 3.4 General learning Setting: The role of the temperature \\tau is bit unclear to me. And the reference to [67, Theorem 1] isn't very helpful. Perhaps a little more clarity as to the role \\tau plays here? You state later in the Experiments section that you use \\tau=5 according to Yaras et al [67]. This hyperparam choice is not very clear to me.\n\n(typo? clarification?) Proposition 1. There is notation discrepancy between what is stated in the Proposition and what is derived in the Appendix B. Namely, the Proposition is stated wrt \\bar{H} but the derivation is carried out for \\tilde{H}. I understand that \\tilde{H} is the normalized (wrt Frobenius) matrix \\bar{H} so perhaps it all works out with the normalization constant but the discrepancy there and comparing back with dimensionality of matrices in the original statement of Proposition 4.5 in Gould et al. [21] (from which this result follows) had me a bit confused. \n\nAre there error bars in Figure 2? I see them for plot (f) but not for the others?\n(clarification) What is depicted in Figure 2(c)? What is \\mathcal{P}_{CM}? I think I somehow missed that.\n\nAre there error bars in Figure 3? Were multiple trials run for these experiments?\n\nTables 1 and 2: The ranges for train and test top-1 accuracy values for STL on VGG seem very large. \n\nIn regards to Figure 4, I'd recommend either performing more training runs for Imagenet on Resnet50. The results look very compelling but without error bars don't say much. Similarly, comparing the results in Figure 4 with those for the other real-world datasets (e.g. Cifar10, Cifar100, STL) those contained in the Appendix which do have error bars are arguably less convincing of the primarily claims of speed to convergence." }, { "confidence": 4, "rating": 5, "review_id": "HwkVpLMWoC", "review_text": "This paper presents a novel approach to utilizing ETF geometry. Instead of fixing the weights or making them learnable, their approach dynamically adjusts the weights by solving a Riemannian optimization problem while allowing end-to-end training. They show that their approach outperforms both the fixed and learnable approaches in terms of convergence speed and generalization.\n\nOriginality:\nThe idea of dynamically adjusting weights is not new, but in the context of neural collapse (NC), it is a natural extension. Fully learnable weights do not provide the ETF structure, and fixed weights are too restrictive. The proposed approach is a good compromise between the two and combines the best of both worlds.\n\nQuality:\nThe paper is well-written, and the proposed approach is carefully supported by theorems and experiments.\n\nClarity:\nThe paper is well-written and easy to follow.\n\nSignificance:\nTheir approach is general and could be applied to a range of problems. The authors applied it to synthetic UFMs and some standard image benchmarks (CIFAR-10, CIFAR-100, STL-10, ImageNet). The authors plan to release code upon acceptance.\n\nOverhead Cost:\nThe proposed method computes the exponential moving average of the feature means, performs a Riemannian optimization, and computes the gradient of DDN. These components introduce overhead in terms of epoch time. The authors claimed in the paper that the gradient of DDN is not computed, and the Riemannian optimization overhead is negligible. This unsupported claim should be backed up by an additional experiment that reports these extra computation times.\n\nStandard Procedure:\n\"To ensure fair method comparison,\" the authors include classifier weight normalization and feature normalization for the standard procedure. This is usually not the case when using CE loss (see Fig 2). The authors should justify this choice by providing the results without these normalizations for the standard procedure.\n\nImage Baselines Results are not SOTA:\nThe reported results are not state-of-the-art. For example, ResNet-18 trained on CIFAR-10 only reaches 80.47%. It seems that these baselines are not well-tuned, and the gain of the proposed approach is not clear and could potentially fade away with a better-tuned baseline. Can the authors comment on this? Additionally, the authors should include the results using ResNet-50 on ImageNet, which should provide a stronger reference point.\n\nFixed ETF Procedure:\nThe authors only used the canonical simplex ETF for the fixed procedure. The weight matrix results in many zeros and could lead to poor performance when used as the fixed classifier because some neurons will be inactive. The authors should include the results using the fixed ETF with a non-canonical (i.e., projection on a random basis).\n\nRemarks:\nThe authors should directly clarify in Tables 1 and 2 the ResNet architecture used (18 or 50).\n\nSee weaknesses section." } ]
z2739hYuR3
Provably Efficient Reinforcement Learning with Multinomial Logit Function Approximation
We study a new class of MDPs that employs multinomial logit (MNL) function approximation to ensure valid probability distributions over the state space. Despite its significant benefits, incorporating the non-linear function raises substantial challenges in both *statistical* and *computational* efficiency. The best-known result of Hwang and Oh [2023] has achieved an $\widetilde{\mathcal{O}}(\kappa^{-1}dH^2\sqrt{K})$ regret upper bound, where $\kappa$ is a problem-dependent quantity, $d$ is the feature dimension, $H$ is the episode length, and $K$ is the number of episodes. However, we observe that $\kappa^{-1}$ exhibits polynomial dependence on the number of reachable states, which can be as large as the state space size in the worst case and thus undermines the motivation for function approximation. Additionally, their method requires storing all historical data and the time complexity scales linearly with the episode count, which is computationally expensive. In this work, we propose a statistically efficient algorithm that achieves a regret of $\widetilde{\mathcal{O}}(dH^2\sqrt{K} + \kappa^{-1}d^2H^2)$, eliminating the dependence on $\kappa^{-1}$ in the dominant term for the first time. We then address the computational challenges by introducing an enhanced algorithm that achieves the same regret guarantee but with only constant cost. Finally, we establish the first lower bound for this problem, justifying the optimality of our results in $d$ and $K$.
https://openreview.net/pdf/c78403d8fed222d4a519ebbc82ee3a1bdfa7dc83.pdf
[ { "confidence": 3, "rating": 5, "review_id": "riOWsaWvoD", "review_text": "This paper considers MDPs employing the MNL function for transition probability, following Hwang and Oh [2023]. The authors suggest efficient algorithms based on online Newton steps, inspired by [Hazan et al., 2014; Zhang et al., 2016; Oh and Iyengar, 2021]. Furthermore, to improve $\\kappa$ dependency, they provide algorithms employing local learning with mirror descent inspired by [Zhang and Sugiyama, 2023; Lee and Oh, 2024]. The algorithms achieve $1/\\sqrt{\\kappa}$ or even detach the dependency of $\\kappa$ from the leading term.\n\nThe suggested algorithms are computationally efficient and show improvement in $\\kappa$ compared to the previous work of Hwang and Oh [2023].\n\n- Their suggested algorithms do not seem novel because they are based on previously proposed methods for logistic or MNL bandits. Specifically, the online Newton update is widely studied for MNL or logistic bandits [Oh and Iyengar, 2021; Zhang and Sugiyama, 2023].\n\n- Furthermore, the improvement on $\\kappa$ is based on the mirror descent algorithm proposed in [Zhang and Sugiyama, 2023; Lee and Oh, 2024], and the proofs seem to follow the steps in [Zhang and Sugiyama, 2023; Lee and Oh, 2024] in the appendix.\n\n- Lastly, the MNL model for transition probability may have an inherent weakness: the number of achievable states for each (k,n) must be finite, and it is required to know the state space of $S_{k,n}$.\n\n[1] Faury, Louis, et al. \"Jointly efficient and optimal algorithms for logistic bandits.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2022.\n\nAre there any non-trivial technical novelties in utilizing the online mirror descent method for MDPs?" }, { "confidence": 3, "rating": 6, "review_id": "GNigoQy4yk", "review_text": "In this paper, the author analyzes a Markov Decision Process (MDP) model with non-linear function approximation. Specifically, in the finite-time horizon inhomogeneous episodic MDPs setting, the transition dynamics are unknown but the reward function is known. The author proposes using a multinomial logit (MNL) function approximation to estimate transition dynamics, which is superior to the linear function approximation if the model is misspecified in \\cite{hwang2023model}. Additionally, the author proposes *UCRL-MBL-OL*, which adapts the previous work that is model-based and has large computational and storage complexity, to an online style that only consumes constant computation and storage resources. Moreover, the author has proven that the regret bound of *UCRL-MBL-OL* matches the state-of-the-art in Theorem 1. Its regret bound achieves $\\tilde{O}(\\kappa^{-1} dH^2\\sqrt{K})$, where $H$ is the time horizon length, $K$ is the number of total episodes and $\\kappa$ is considered as a parameter to control the sparsity of the transition dynamics and $d$ is the hidden dimensionality. Ignoring the logarithmic factor and $\\kappa$, such a regret bound has only a $\\sqrt{H}$ gap compared to the lower bound. After that, with additional assumption, the author utilizes the local information to propose another two algorithms, *UCRL-MNL-LL* and *UCRL-MNL-LL+* to remove the dependence on $\\kappa$ and get a tighter regret bound as well as maintain good properties of *UCRL-MNL-OL*.\n\n1. This paper is well-written. The author makes a clear improvement point compared to the literature.\n\n2. The algorithm proposed by the author enjoys an online learning style that does not need to maintain a large historical set.\n\n1. Although this paper focuses on reducing the computation complexity, I am curious about the sample complexity of *UCRL-MNL-OL*.\n \n2. Since the algorithm builds up the estimation of the transition dynamics by using MNL function approximation, is it considered a model-based algorithm? More specifically, does it require storing the transition dynamics for each state-action pair in every step?\n\nPlease see the above \"Weaknesses\"" }, { "confidence": 4, "rating": 6, "review_id": "kJTFmaWugn", "review_text": "This work studies the MNL function approximation inhomogeneous RL, achieves the $O(1)$ computation cost, and improves the regret guarantee with regard to $\\kappa$. To improve the computation cost, this work employs the online Newton step instead of MLE estimation to estimate $\\theta$. Then, they design a novel confidence set by making full use of local information to improve the dependence of $\\kappa$.\n\n1.\tThe use of local information instead of a uniform $\\kappa$ is novel and useful to improve the dependence of $\\kappa$.\n2.\tThe UCRL-MNL-LL+ removes the $\\kappa$ dependence on the lower-order term and almost matches the optimal regret results by using high-order Taylor expansion.\n\n1.\t[1] also use the online Newton step to improve the computation cost in the logit contextual bandits setting. It would be better to discuss the novelty of UCRL-MNL-OL.\n\n[1] Oh, M. H., & Iyengar, G. (2021, May). Multinomial logit contextual bandits: Provable optimality and practicality. In Proceedings of the AAAI conference on artificial intelligence (Vol. 35, No. 10, pp. 9205-9213).\n\nQuestion 1: This work achieves great results in the stochastic reward setting. Can you discuss the challenge when extending to the adversarial reward setting?" }, { "confidence": 3, "rating": 6, "review_id": "b3DZaxmgWN", "review_text": "The problem considered in this paper is online learning in MDPs where transition probabilities are modelled with a log-linear model (with \"multinomial logit function approximation\"). The finite horizon, time-inhomogenous setting is considered. The problem is motivated by allowing a nonlinear transformation in modeling the MDP and yet maintaining both computational and information theoretic tractability. Inspired by results in the analogous bandit problems and algorithms developed for them, a number of gradually more complex, but (statistically) better performing algorithms are considered. In particular, while naive approaches give a poor dependence on a problem parameter $\\kappa$ that characterizes the \"strength\" of nonlinearity, by adopting previous ideas to the MDP setting, new algorithms are designed that eliminate this poor dependence. A lower bound is also established, which nearly matches the upper bound (but considers infinite action spaces, while the main paper considers finite action spaces).\n\nThis is a reasonable problem setting; and the approach is also reasonable. It is nice to have a lower bound, even if there is a mismatch between the settings. It is nice to see that ideas that were developed for the bandit setting generalize to the MDP setting.\n\n1. The novelty is limited by that we have seen the same story, same ideas playing out nicely in the closely related bandit setting. \n2. A new parameter, U, the number of next states that are reachable with positive probability in the worst case, appears in the analysis and will appear in the bounds.\n3. It is an unpleasant surprise for the reader to discover this dependence only through carefully reading the paper, rather than being told upfront. It is not good that the opportunity to discuss whether this quantity needs to enter the regret bound, and that this quantity needs to be small for the algorithm to be tractable, is missed.\n4. Line 83 and onward: The work of Uohamma is discussed but is mischaracterized. My reading of this work is that they do establish that their algorithm runs in polynomial time. It remains unclear why the exponential family model is incomparable with the one considered here; an explanation (with examples) is missing.\n5. The paper could use some extra proofreading (e.g., the upper indices in the bottom of page 5, in the displayed equation are not correct); in line 149, in the definition of $U$, $|\\cdot|$ is missing.\n\n1. Can you confirm that the regret and compute cost depend on U, the worst-case number of next states that one can transition to with positive probability? Do you think such dependencies are necessary? Are there any interesting examples where it is reasonable to expect that U is small, independently of the size of the state space?\n2. What was the most challenging aspect of extending the bandit ideas to the MDP framework?" }, { "confidence": 3, "rating": 6, "review_id": "FKyl2pwdfS", "review_text": "The paper studies the recently proposed MDPs that use multinomial logit function approximation for state distribution validness. The results and algorithms improve the prior work of Hwang and Oh [2023] in multiple aspects, including computation efficiency, storage, and statistical dependence on the problem-dependent quantity $\\kappa$ that can be exponentially small. In addition, the authors establish a matching lower bound on $d$, the feature space dimension, and $K$, the number of episodes.\n\n- The paper is well-written and has clear logic flows. Readers can see how the authors approach the MDP problem and tackle the challenges. In particular, Table 1 is quite useful for demonstrating the advancements in the work.\n- The improvements in both computation and storage efficiencies are essential for practical applications. In Theorem 2, the authors also improve the dependence on $kappa$ to $\\sqrt{\\kappa}$ without affecting efficiency. The enhancement seems significant, especially since the parameter can be exponentially small. \n- The lower bound established in the paper is the first to demonstrate the optimality of the authors' algorithms in the $d$-$K$ dependence. Per my understanding, it also confirms the results' optimality of Hwang and Oh [2023].\n\n- The primary high-level techniques and tools (seem to) come from existing works and relevant fields, such as MNL contextual bandits. The authors should put more effort into highlighting the technical challenges and novelties besides the previous comparisons. \n- It would be beneficial to include experiments on synthetic and real-world datasets and compare the results to existing baselines and relevant works. In particular, the new algorithms seem more involved than prior ones, which may affect their stability and adaptiveness.\n- There is still a significant gap between the lower and upper bounds. Besides, I wonder how often $\\kappa$ could be exponentially small in practical settings, though it's definitely of theoretical interest to approach the lower limits on parameter dependency.\n\nOverall, I think the paper makes reasonable contributions to the problem, and I have no additional questions/comments besides the above." } ]
z1GwaNoGnr
XMask3D: Cross-modal Mask Reasoning for Open Vocabulary 3D Semantic Segmentation
Existing methodologies in open vocabulary 3D semantic segmentation primarily concentrate on establishing a unified feature space encompassing 3D, 2D, and textual modalities. Nevertheless, traditional techniques such as global feature alignment or vision-language model distillation tend to impose only approximate correspondence, struggling notably with delineating fine-grained segmentation boundaries. To address this gap, we propose a more meticulous mask-level alignment between 3D features and the 2D-text embedding space through a cross-modal mask reasoning framework, XMask3D. In our approach, we developed a mask generator based on the denoising UNet from a pre-trained diffusion model, leveraging its capability for precise textual control over dense pixel representations and enhancing the open-world adaptability of the generated masks. We further integrate 3D global features as implicit conditions into the pre-trained 2D denoising UNet, enabling the generation of segmentation masks with additional 3D geometry awareness. Subsequently, the generated 2D masks are employed to align mask-level 3D representations with the vision-language feature space, thereby augmenting the open vocabulary capability of 3D geometry embeddings. Finally, we fuse complementary 2D and 3D mask features, resulting in competitive performance across multiple benchmarks for 3D open vocabulary semantic segmentation. Code is available at https://github.com/wangzy22/XMask3D.
https://openreview.net/pdf/a8f19e2b718d05d788970bd70a67a13e857d6c3f.pdf
[ { "confidence": 3, "rating": 5, "review_id": "PxfWILE7VV", "review_text": "This paper introduces XMask3D, a framework developed for open vocabulary 3D semantic segmentation. They propose the integration of the denoising UNet, derived from a pre-trained diffusion model, to generate geometry-aware segmentation masks conditioned on learnable implicit 3D embeddings. These binary 2D masks are used to filter mask-level embeddings of 3D representations and apply mask regularization, thereby improving the open vocabulary capacity of 3D features.\n\n1.\tThe motivation is clear.\n2.\tThe proposed method is intuitive, and the experiments have validated their contributions.\n\n1.\tThe organization should be improved. Section 3.1 provides an overview, while section 3.2 includes design insights and preliminary findings. The flow of these writings has puzzled me, making it difficult to grasp your key contribution.\n\n1.\tPlease provide further clarification on how mask-level alignment between 3D features and 2D embedding space can address the limitations of traditional techniques, such as global feature alignment or vision-language model distillation. Additionally, if texts (Category Labels) are concatenated with fused features, will it still create a unified feature space that encompasses 3D, 2D, and textual modalities?\n2.\tCould you please provide further clarification on the main contributions of your research compared to PLA and OpenScene? Although the 3D caption process shares similarities with PLA, the overall pipeline resembles OpenScene, with the exception of the diffusion model and mask generator, which differ from the Multi-view Feature Fusion in OpenScene.\n3.\tDoes the Implicit 3D Captioner effectively work with your 3D features? From my understanding, the most reliable 3D captioner currently available is Cap3D, which generates captions for 3D objects by rendering multi-view images and utilizing BLIP2 and LLM for assistance. In the context of indoor-scenes, can we consider the Implicit 3D Captioner to be equally robust? It would be beneficial to present additional evidence to support this claim.\n4.\tCan your text-to-image diffusion model effectively generalize to your datasets? If not, please provide examples of failure cases. Additionally, is the diffusion model fine-tuned during the training process or is it frozen? If not, please present additional results to demonstrate the robustness of your diffusion model in generating high-quality images within your datasets.\n5.\tWhat is view-level contrastive loss? Why this loss is calculated between the view global feature and text embedding of the view image caption but have three coefficients?\n6.\tIt is recommended to show your 2D Mask and 3D Mask in Figure 3 to provide more visual evidence.\n7.\tThe authors should provide results on ScanNet++ (CVPR’23), which is a up-to-date dataset compared with ScanNet.\n8.\tSince diffusion models are utilized, it is recommended to compared the model parameters and FLOPs compared with PLA and OpenScene." }, { "confidence": 4, "rating": 5, "review_id": "m2utdEThw3", "review_text": "The paper proposes a precise and consistent mask-level alignment between 3D features and the 2D-text embedding space through a method called cross-modal mask reasoning. The proposed XMask3D model includes a 3D branch for capturing geometric features, a 2D branch for generating vision-language aligned masks, and a fusion block to combine 3D with 2D. Using a pre-trained text-to-image diffusion model as the 2D mask generator, the model leverages three techniques: 3D-to-2D mask generation, 2D-to-3D mask regularization, and 3D-2D mask feature fusion.\n\n1- The idea is novel, the author propose to merge 2D which provides high OV capabilities, with 3D features shich endoces 3D geometry. \n\n2- The method performs remarkably better than the reported models, namely OpenScene. The experiments are also well structure\n\n1- The authors don't compare with state-of-the-art 3D semantic segmentation OV3D[1]\n\n2- The authors highlighed fututre work in the limitation, it would be good if you can expand it with some limitation on the technical side or some failure cases.\n\n[1] Jiang, Li, Shaoshuai Shi, and Bernt Schiele. \"Open-Vocabulary 3D Semantic Segmentation with Foundation Models.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\nPlease compare to OV3D mentioned in the weaknesses" }, { "confidence": 4, "rating": 7, "review_id": "I6EPZ3IczQ", "review_text": "The paper addresses the limitations of current open vocabulary 3D semantic segmentation methods, which primarily focus on creating a unified feature space for 3D, 2D, and textual modalities but struggle with fine-grained segmentation boundaries. To overcome these limitations, the authors propose XMask3D, a cross-modal mask reasoning framework that achieves more precise mask-level alignment between 3D features and the 2D-text embedding space.\n\n1. The part \"incorporating a 2D mask generator to create geometry-aware open masks and apply fine-grained mask-level regularization on 3D features\" seems reasonable and novel.\n2. The paper is well-structured and easy to follow.\n3. Analysis is thorough and insightful.\n\n1. The paper evaluates the proposed method on a limited set of benchmarks (ScanNet20, ScanNet200, S3DIS), all of which are indoor scene datasets. Authors could discuss how the method might perform on outdoor datasets. Additionally, the authors could provide a qualitative analysis of the model's potential limitations when applied to different environments.\n2. The reliance on the denoising UNet from a pre-trained diffusion model could be seen as a potential weakness or limitation, especially given the computational resources required for training and inference.\n\n1. The paper could benefit from a more detailed error analysis to understand the failure modes of XMask3D, especially in novel category segmentation." }, { "confidence": 4, "rating": 6, "review_id": "UeCtgjNXVU", "review_text": "This paper addresses the challenge of open-vocabulary 3D semantic segmentation by utilizing 3D geometric features, 2D semantic embeddings, and text modality. The proposed approach adapts the ODISE method to the 3D domain, aiming to distill open-vocabulary semantic segmentation knowledge from a pre-trained text-to-image denoising diffusion model to a 3D segmentation model. Initially, an input point cloud is fed into a 3D encoder-decoder segmentation network, producing point-wise geometric features. Simultaneously, a pre-trained visual-language diffusion model generates 2D masks and embeddings from posed images of the same scene, conditioned on the 3D global feature of the 3D branch’s encoder. Unlike the ODISE method, an implicit $3D$ captioner is introduced to produce geometry-aware 2D masks while also distilling information from the 2D branch network to the 3D encoder. To further regularize the 3D network, a distillation loss ($\\mathcal{L}_{mask}$) is applied to the 3D mask embeddings, derived from the per-point features and the 2D masks back-projected to the point cloud as 3D binary masks. By obtaining ground truth mask features from a pre-trained CLIP model, the 3D masked embeddings are aligned with the image-text joint embedding space, through a cosine similarity loss. This alignment leads to more coherent segmentation results and enhances the model's open-vocabulary capabilities. Finally, the per-point features are combined with the pseudo mask 2D features (formed by the back-projected 3D mask and 2D mask embeddings), resulting in a fused per-point representation that incorporates the geometric information from the 3D segmentation network and the semantic open-vocabulary capabilities of the 2D branch. The approach is evaluated on three semantic segmentation benchmarks (ScanNet, ScanNet200, and S3DIS) and demonstrates superior performance compared to competing methods.\n\nThe XMask3D effectively aligns 3D geometric features with 2D and textual modailities through knowledge distillation from visual-text joint embedding spaces inherent in the pre-trained 2D denoising UNet and the CLIP model. As evident by the ablation, the implicit 3D captioner is a crucial step in the overall pipeline, and it outperforms vanilla text conditioning or the implicit 2D captioner of ODISE, in both base and novel semantic categories. Moreover, the 2D-to-3D mask regularization is also essential, since it significantly improves the accuracy of the proposed method esp. in novel categories. This justifies the need for this additional distillation step from the CLIP joint space, to further enhance the open-vocabulary capabilities of the XMask3D method. Finally, the discussion on modality fusion, both in the main paper and supplementary, is highly appreciated. By dissecting the method and providing qualitative and quantitative results for each step, the authors make it easier for readers to understand and gain intuition about the presented approach.\n\nWhile the method exhibits superior performance w.r.t. competing methods, it seems that the output fused embeddings yields to geometric inconsistent features for semantic classes that cover large areas of the point cloud such as wall, ceiling and floor. This is evident in both partitioning settings when the class is either base or novel (Table 5 (a) and (b) in supp.).\n\nFollowing the weaknesses section, do the authors have any additional insights into why this phenomenon occurs with the fused embeddings? Could the 2D regularization terms ($\\mathcal{L}_{seg}^{2D}$, $\\mathcal{L}\\_{view}^{2D}$) be introducing too much bias towards the 2D visual modality, thereby causing geometric discontinuities in the output fused features?" } ]
z0I2SbjN0R
DiffusionPDE: Generative PDE-Solving under Partial Observation
We introduce a general framework for solving partial differential equations (PDEs) using generative diffusion models. In particular, we focus on the scenarios where we do not have the full knowledge of the scene necessary to apply classical solvers. Most existing forward or inverse PDE approaches perform poorly when the observations on the data or the underlying coefficients are incomplete, which is a common assumption for real-world measurements. In this work, we propose DiffusionPDE that can simultaneously fill in the missing information and solve a PDE by modeling the joint distribution of the solution and coefficient spaces. We show that the learned generative priors lead to a versatile framework for accurately solving a wide range of PDEs under partial observation, significantly outperforming the state-of-the-art methods for both forward and inverse directions.
https://openreview.net/pdf/d12cbf722d1e7501e11593285562cb5fb783d08a.pdf
[ { "confidence": 4, "rating": 5, "review_id": "2v5ZG9boCi", "review_text": "This paper introduces diffusion methods to tackle the partially observed PDEs, named DiffusionPDE. By learning the joint distribution of solution and coefficient space, the proposed model can handle both forward and inverse problems. The authors experiment with diverse PDEs and settings to demonstrate the model's effectiveness.\n\n-\tThis paper successfully utilizes the diffusion methods in solving PDEs, covering both forward and inverse problems.\n\n-\tThe main text and supplementary materials provide diverse experiment settings, which can well support the model’s effectiveness on partial observations.\n\n-\tThis paper is overall clear and well-written.\n\n1.\tThe technical contribution is limited.\n\nFrom a technical view, this paper is an application of the diffusion model in PDE solving. There are also some previous methods that also use diffusion methods and leverage the PDE loss [1]. Thus, I think the technical novelty is limited.\n\n[1] A Physics-informed Diffusion Model for High-fidelity Flow Field Reconstruction, JCP 2023\n\n2. Some powerful baselines are missing.\n\n- According to Figure 1, I think the base model of DiffusionPDE is U-Net. How about comparing it with a single U-Net? I think U-Net could be a powerful baseline.\n\n- There are also some latest models that are good at processing partially observed or irregularly placed PDEs, such as OFormer [1] and Transolver [2]. They should include them as baselines.\n\n[1] Transformer for Partial Differential Equations' Operator Learning, TMLR 2023\n\n[2] Transolver: A Fast Transformer Solver for PDEs on General Geometries, ICML 2024\n\n3. Model efficiency comparisons are needed, including GPU memory and running time.\n\n4. I think the proposed model cannot predict the future evolution of a time-dependent PDE. Current tasks are all about “reconstruction” or “imputation”.\n\n1.\tFigure 4, do both forward and inverse tasks use the same diffusion model? Or do we need to train two models for these two different tasks?\n\n2.\tI think the base model is U-Net. So how does DiffusionPDE handle the spatially scattered partial observations? Is the input still in the regular grid, but only the sampled locations have ground-truth values?" }, { "confidence": 4, "rating": 6, "review_id": "AXEQgaiI3m", "review_text": "The paper proposes to solve PDEs given only sparse measurements by jointly modeling the solution and coefficient space (e.g. the initial conditions) using a diffusion model. By applying diffusion posterior sampling (DPS) the authors obtain samples that are consistent with the sparse measurements and the underlying PDE equations. Several experiments show superior performance of the method compared to standard baselines such as PINNs and FNOs.\n\n- Solving PDEs under partial observation is an important problem in real-world applications\n- The proposed method is technically sound and improves upon existing baseline methods (PINN, FNO) that do not work well for sparse measurements\n- Leveraging a pretrained diffusion model as a generative prior to model the joint distribution of solution and coefficient space is a good idea\n- The presentation of the method is clear and supported by concise algorithms and equations. The paper is well written overall\n- Experiments consider standard baseline methods for PDEs and cover a sufficient range of different dynamics\n\n----\n\nPost-rebuttal: the authors have addressed quite a few of the initial concerns, and while some concerns (e.g. about the magnitude of the contributions remain), I'd be happy to support an accept. I've raised my score accordingly.\n\n- The main weakness of the method is the limited novelty. Both sparse measurements and physics-based losses have been considered together with diffusion models, see e.g. Shu et al. (2023). So it seems to me that the main technical novelty is to apply diffusion models to model the joint distribution of two simulation states at different points in time and apply DPS during inference for consistency with the sparse measurements and PDE constraints. \n- The experiments do not take into account any stochasticity or uncertainty. In principle, DPS will give a distribution of solutions, which is not the case for the other baseline methods, but this is not explored further in the paper. \n- Since the joint distribution models two states at time 0 and time T (for all experiments except Burgers' equation) and $0 \\ll T$, the authors need to simplify the PDE loss $\\mathcal{L}_{pde}$ to drop any time derivatives. This is a serious limitation. \n- It is not clear if DPS works better than classifier-free guidance, as used e.g. in Shu et al. (2023), or other methods for solving inverse problems with diffusion models. \n- DPS requires a lot of compute during inference for calculating $\\mathcal{L}_{pde}$. For a fair comparison, it would be important to show the number of parameters, training time and inference time for all methods.\n\n- Algorithm 1 shows an adaption of DPS to EDM (Karras et al. 2022). Is this adaptation novel? Can the authors give some intuition why they apply the DPS losses in line 12 and 13 to the 2nd order correction (line 8) and not apply any trapezoidal rules in this case? \n- Are sparse measurements located on a grid that matches the resolution of the diffusion model or do they have continuous coordinates? In the second case, how are they interpolated to match the data resolution of the diffusion model? Does that make classifier-free guidance difficult to apply?\n- As noted in the weaknesses: why not use classifier-free guidance? I would like to see a discussion of different methods for inverse problems and diffusion models that can be used here instead of DPS and what are the advantages of using DPS. Reconstructing the solution/coefficient space from sparse measurements alone is a linear inverse problem with a number of different methods that can be used (e.g. Denoising Diffusion Restoration Models; Kawar et al. 2022, among many others) which oftentimes have much nicer theoretical guarantees/higher quality reconstructions and faster sampling speed. When considering these methods, is adding the PDE loss $\\mathcal{L}_{pde}$ and thus making the problem a non-linear inverse problem really beneficial?\n- Likewise, as mentioned above: what are the parameter counts and runtimes of the method and the baselines?" }, { "confidence": 3, "rating": 4, "review_id": "C9OuJt8BIo", "review_text": "The work uses a guided diffusion process to solve the PDE forward and inverse problems with partial observations. Instead of learning the parameter-to-solution map ($a\\rightarrow u$) as in Neural Operators, the method learns the diffusion process on the joint distribution $(a,u)$, and use guided diffusion for inference under sparse observations. Compared with several baseline method, the proposed method shows improved performance for solving forward and inverse problem with sparse observations.\n\nThe work uses a guided diffusion process to solve the PDE forward and inverse problems with partial observations.The authors compare with several baseline methods. The idea is clearly presented, and might be useful for the community.\n\nThe paper presents an interesting approach to solving PDE forward or inverse problems with sparse observations, which is an appealing concept given the minimal data requirement. However, this approach raises some concerns about the well-posedness of the problem. For example, in forward problems where sparse observations of the parameter $a(x_i)$ are available, there are infinitely many ways to interpolate $a$ and solve the PDE to obtain $u$. They are all valid solutions that satisfy both the PDE and the observations. This suggests that the method's ability to achieve good recovery might heavily rely on the strong regularization imposed by the training dataset, potentially limiting its practical utility as it may only favor solutions resembling those in the training set.\n\nAdditionally, in Appendix C, Table 2, the weightings for observation and PDE loss are significantly higher (by two to six orders of magnitude) than those for $\\nabla_x \\log(p(x))$ as described in Equation 8, which might indicate a predominance of data fitting over the diffusion process. It would be beneficial if the authors could provide more guidance on how these weights were chosen and discuss the implications of using smaller weights. Understanding the rationale behind these choices could help clarify the model's dependency on these parameters and their impact on the solution's behavior.\n\n(1) The results from the baseline methods (PINO, DeepONet, PINNs, and FNO) are so bad. The claim that these methods are \"not easily extendable\" invites further scrutiny:\n\n(a) All the baseline models are supposed to represent smooth functions. However, in Figure 4, 7, 8, they look discontinuous at the training points.\nAn explanation of how these models were trained and how inference was conducted could clarify why these discrepancies appear.\n\n(b) Taking PINNs as an example in the Darcy flow problem.\nLet $\\hat{a}(x)$ and $\\hat{u}(x)$ be the (potentially noisy) observation at $x$.\nWe can represent $a(x)$ by a neural network, $a_V(x)$ (use neural net for convenience, could be other representations), and the PDE solution $u(x)$ by a neural net $u_W(x)$, where $V$ and $W$ are the weights of the neural network.\nWe can solve the following optimization problem:\n\n$$\\min_{V,W} \\sum_{x\\in T_d} (u_W(x) - \\hat{u}(x))^2 + (a_V(x) - \\hat{a}(x))^2 + \\sum_{x \\in T_r} (\\nabla \\cdot (a_V(x) \\nabla u_W(x)) - q(x))^2$$\n\nwhere $T_d$ is the observational point, and $T_r$ is the residual points, which does not need to be the same as $T_d$.\n\n(c) Similar, for a trained neural operator parametrized by W, $G_W[a] = u$. We can solve the following optimization problem:\n\n$$\\min_V \\sum_{x\\in T_d} (a_V(x) - \\hat{a}(x))^2 + (G\\[a_V\\](x)-\\hat{u}(x))^2$$\n\nwhere $G[a_V]$ is the solution of the PDE with parameter $a_V$.\n\nIt seems that all the baseline methods can be used for forward and inverse problems with sparse observation. \nIt is unclear why the proposed method would offer superior performance compared with the baselines. \n\nIn contexts where full observation are available, as shown in Table 4, one might intuitively expect methods like PINNs—which utilize residual losses to ensure adherence to PDE constraints—or Neural Operators—which establish a direct parameter-to-solution mapping—to deliver more accurate results compared to a method that relies on a diffusion process. This leads to a critical inquiry on why diffusion process gives better accuracy for PDE problems.\n\n(2) How is the PDE loss computed? Is it by finite difference on a regular grid? Detailing this in the main text could help readers assess the accuracy and applicability of the PDE loss in different scenarios." }, { "confidence": 5, "rating": 8, "review_id": "nZRQ0jN4rd", "review_text": "The paper uses score based generative diffusion models to find the forward and backwards solution of a set of PDEs given partial observations of the solution and/or incomplete knowledge of the coefficients. The method performs well, and outperforms other ML methods such as FNO, as well as 'standard' FE type methods, for a range of standard test problems. The method reconstructs This is a novel approach, which delivers good performance, with low errors at a competitive speed. Extensive tests are given, with careful analysis of the results.\n\nThe use of score based generative methods in this context, where both the solution and the parameter estimates are updated, is novel. The method is clearly effective for the problems considered and should have good applications to real world examples. Extensive tests on a series of standard test problems show that the errors of the method are much lower than other ML based methods such as FNO.\n\nThis paper suffers as do many similar papers from a limited range of examples. It concentrates on the usual examples of PDEs such as NS and Bergers, and in both cases of these it looks at problems with quite moderate viscosity, which are realtively east to solve. This is more or less inevitable for such a short paper as this, especially as comparisons are needed with other method. But I would have liked to have seen more novel examples than the usual ones. This is not really a criticism of this paper, but is something to consider for future work. It would be imporoved by a fairer comparison with other methods which work with incomplete data and measurements. A clear exanple of this being the data assimilation widely used in physical modelling for just this range of problems. These should be descibed somewhere in the introduction and in Section 2. (Although of course these latter methods are slow in comparison.) The method is also limited (see later) to looking at certain slices of the solution.\n\n1. How does this method compare with a data assimilation approach\n2. How easy would it be to extend the method to full time intervals \n3. How easy is it to extend the method to higher dimensions\n4. Have the authors tried out the method on more challenging PDE examples.\n5. Also consider tests on NS and Bergers' eqn with much smaller viscosity." } ]
yzviAnpvU6
ReLIZO: Sample Reusable Linear Interpolation-based Zeroth-order Optimization
Gradient estimation is critical in zeroth-order optimization methods, which aims to obtain the descent direction by sampling update directions and querying function evaluations. Extensive research has been conducted including smoothing and linear interpolation. The former methods smooth the objective function, causing a biased gradient estimation, while the latter often enjoys more accurate estimates, at the cost of large amounts of samples and queries at each iteration to update variables. This paper resorts to the linear interpolation strategy and proposes to reduce the complexity of gradient estimation by reusing queries in the prior iterations while maintaining the sample size unchanged. Specifically, we model the gradient estimation as a quadratically constrained linear program problem and manage to derive the analytical solution. It innovatively decouples the required sample size from the variable dimension without extra conditions required, making it able to leverage the queries in the prior iterations. Moreover, part of the intermediate variables that contribute to the gradient estimation can be directly indexed, significantly reducing the computation complexity. Experiments on both simulation functions and real scenarios (black-box adversarial attacks neural architecture search, and parameter-efficient fine-tuning for large language models), show its efficacy and efficiency. Our code is available at https://github.com/Thinklab-SJTU/ReLIZO.git.
https://openreview.net/pdf/fc7ea3437f516b19fb6feff0c372deeda8df7019.pdf
[ { "confidence": 4, "rating": 7, "review_id": "F8CI8pjUts", "review_text": "From my understanding, this paper give a zero-th order algorithm with application to popular vision tasks neural architecture search and black-box adversarial attacks. The authors derive a closed-form solution after modeling the gradient estimation as a quadratically constrained linear program problem. The key idea is to try to decouple the required sample size from the variable dimension without extra conditions required, making it able to leverage the queries in the prior iterations. The speedup is further technically achieved by directly indexing some of the intermediate variables that contribute to the gradient estimation. The theoretical studies are given for its convergence speed and its cost-effectiveness is verified on benchmarks.\n\n1. Clear motivation with clearly derived approach, and it is a new zero-th order algorithm indeed and the authors also contextualize well the proposed method with related work discussion.\n2. Strong empirical performance on representative vision tasks with rich testbeds and settings.\n3. The approach by its design, could enjoy the efficiency of smoothing techniques while maintaining estimation accuracy. Table 4 in the appendix is informative.\n4. The paper gives comprehensive results and technical details in both main paper and appendix.\n\n1. As remarked by the authors, it has few constraints on the sample size, similar to the smoothing techniques; and it requires the estimation of the gradients which involves solving a linear program problem.\n2. As a zero-th order algorithm, it may still not be suited for large-scale application e.g. network training.\n\nI wonder if the proposed method could really facilitate the community of NAS? as zero-order optimization is not common in NAS." }, { "confidence": 3, "rating": 6, "review_id": "p44YTsT8ME", "review_text": "The paper introduces ReLIZO, a novel zeroth-order optimization method leveraging linear interpolation to estimate gradients efficiently. It reduces the complexity of gradient estimation by reusing prior queries without additional conditions on sample size, decoupling it from variable dimension constraints. ReLIZO models gradient estimation as a quadratically constrained linear program, solving it analytically to reduce computation complexity. Experimental results demonstrate ReLIZO's efficacy in various scenarios, including black-box adversarial attacks and neural architecture search, showcasing faster convergence and better solutions compared to existing methods.\n\n* The paper is well-written, with clear and easy-to-follow explanations.\n* The paper introduces a method for estimating gradients using arbitrarily sampled vectors without requiring orthogonal conditions or adherence to a specific distribution, enabling the reuse of queries to accelerate the zeroth-order (ZO) optimization process.\n* Extensive experiments on simulation benchmarks and real-world applications validate the method’s performance.\n* The paper highlights that ReLIZO can be viewed as a generalized version of traditional linear interpolation methods, capable of handling both equal and smaller sample sizes compared to variable dimensions. This demonstrates ReLIZO's theoretical soundness and enhanced flexibility in gradient estimation.\n\n* The effectiveness of reusing queries depends on the choice of the reusable distance bound, which might require fine-tuning for different applications, adding complexity to its implementation.\n* While the method reduces the number of function queries, the process of solving the quadratically constrained linear program might introduce additional computational overhead for large $n$.\n\nIn Figure 5, the results for the ARGTRIGLS problem indicate that any reusable distance bound leads to a performance drop. What is the specific structure of this problem? Does this suggest that the reuse strategy may be ineffective in certain special cases?" }, { "confidence": 3, "rating": 6, "review_id": "BYO12EN68o", "review_text": "This study introduces a novel gradient estimation algorithm that operates solely on forward function evaluations. The method employs a Quadratically Constrained Linear Program (QCLP) to determine the optimal linear approximation of sample vectors. The authors present performance enhancement strategies, including sample reuse and efficient inverse matrix computation within the QCLP framework. Empirical evaluations conducted on black-box adversarial attacks and neural architecture search demonstrate the proposed algorithm's superiority over existing zeroth-order methods.\n\n1. The proposed method is natural. Approximating the gradient using linear combinations of samples and formulating as the QCLP is an intuitive idea, and the auxiliary techniques employed in this study are both judicious and pertinent to the research objectives.\n\n2. The paper is well-written and easy to follow.\n\n1. Zeroth-order gradient estimation has a relatively limited impact. While the proposed zeroth-order gradient estimation method demonstrates superiority over existing algorithms in its class, its overall impact on solving underlying optimization problems may be constrained. This limitation is exemplified in the NAS evaluation, where ReLIZO does not consistently achieve optimal performance.\n\n2. According to my interpretation, in ReLIZO algorithm, obtaining new samples from the input space in each iteration is random and arbitrary. I feel there might be more effective strategies to sample new vectors based on known information. Could the authors comment on this?\n\nSee weakness." } ]
yySpldUsU2
Changing the Training Data Distribution to Reduce Simplicity Bias Improves In-distribution Generalization
Can we modify the training data distribution to encourage the underlying optimization method toward finding solutions with superior generalization performance on in-distribution data? In this work, we approach this question for the first time by comparing the inductive bias of gradient descent (GD) with that of sharpness-aware minimization (SAM). By studying a two-layer CNN, we rigorously prove that SAM learns different features more uniformly, particularly in early epochs. That is, SAM is less susceptible to simplicity bias compared to GD. We also show that examples constraining features that are learned early are separable from the rest based on the model’s output. Based on this observation, we propose a method that (i) clusters examples based on the network output early in training, (ii) identifies a cluster of examples with similar network output, and (iii) upsamples the rest of examples only once to alleviate the simplicity bias. We show empirically that USEFUL effectively improves the generalization performance on the original data distribution when training with various gradient methods, including (S)GD and SAM. Notably, we demonstrate that our method can be combined with SAM variants and existing data augmentation strategies to achieve, to the best of our knowledge, state-of-the-art performance for training ResNet18 on CIFAR10, STL10, CINIC10, Tiny-ImageNet; ResNet34 on CIFAR100; and VGG19 and DenseNet121 on CIFAR10.
https://openreview.net/pdf/b07ede96cc83e38f5a579bbba4e95073247adb82.pdf
[ { "confidence": 2, "rating": 6, "review_id": "gO04npIXWy", "review_text": "It is known that usually deep neural networks will learn “easy examples\" that contain fast-learnable features first while learning more complex examples in a second time. The authors argue that mitigating such simplicity bias is the reason method like SAM are outperforming SGD. Based on such analysis, the authors introduce their methods coined as USEFUL that consists in two setups: 1) Identifying the examples with fast-learnable features using a clustering method based on layer output similarity 2) Upsampling by a constant factor the remaining examples with slow-learning features. By doing so, the authors can significantly increase model performances and training time on different classification tasks using different optimizers. They assess their methods across a wide range of dataset and different hyper-parameters and outperform random clustering baseline.\n\nThis paper is well motivated and written. The method seems to be sounded and I really appreciate that the authors assess their method using different hyper-parameters such as optimizer, batch size, datasets, upsampling factor, architectures, and data augmentation. It is also great that they ran a baseline with random clustering.\n\nIt is not clear when and why one should choose the last output activation vector to define the clustering instead of intermediate activation vector. It is also not clear at which epoch one should decide to do the clustering since for a dataset like CIFAR10 the optimal performances are achieved at epoch 8 while for CIFAR100 it is epoch 20. So, finding the correct hyper-parameters for the clustering might be costly and thus impact how fast convergence can really be (if we consider this needed additional ablation on clustering epoch). In addition, the authors mention that they are using an upscaling factor of 2, but I am wondering how robust this is when using long-tail distribution. For example, I am not sure that on something like ImageNet-LT or Inaturalist, we will get the best performances by using a constant factor. I would also be a bit more cautious about some of the claims made in the papers. For example, the authors claim that their method is generalizing to OOD tasks while providing experiments on only the WaterBird dataset. So, it would be better to write about promising preliminary results than claiming generalization on OOD.\n\n1) Do you think your method will also generalize on Long-Tail dataset while keeping an upscaling factor constant? \n\n2)) Any ideas or heuristics about how to find the optimal epoch/layer to perform the clustering without running an expensive ablation?" }, { "confidence": 3, "rating": 6, "review_id": "nlj9GM1obg", "review_text": "This work aims to modify the training data distribution to improve in-distribution generalization. First, the authors theoretically analyse a 2-layer CNN and compare the feature learning dynamics (fast learnable and slow-learnable features) of Gradient Descent (GD) and Sharpness-Aware Minimization (SAM). It is then shown that SAM mitigates simplicity bias compared to GD. The authors then propose USEFUL (UpSample Early For Uniform Learning), a method that upsamples the examples in the training set that contains slow-learnable features. USEFUL first clusters the examples with similar outputs early in the training and then upsamples the slow-learnable clusters. The main idea behind USEFUL is to learn features at a uniform speed (similar to SAM) by changing the training data distribution. USEFUL can be trained with SGD, SAM and SAM + Trivial Augment. Results on CIFAR-10, CIFAR-100, STL10, TinyImageNet indicate that USEFUL is across datasets and architectures. Additonal ablation and analysis show that USEFUL learns similar properties to SAM (for e.g less sharp solutions).\n\n1. Originality: The question posed by the authors “Can we change the training data distribution such that the model trained on it has similar properties to SAM?” is interesting and novel. The proposed method is also well-motivated.\n2. Results: The authors perform a comprehensive set of ablations and analysis on the proposed method USEFUL. Section 5.4 that shows that USEFUL’s solution has similar properties to SAM, which answers the question raised in the motivation of the paper. I also particularly like the ablations with upweighting loss and data selection method in Appendix D.6.\n3. Overall, the paper is fairly well written. One minor point to address here is that the paper covers multiple concepts like SAM, simplicity bias, flat minima and uniform feature learning. It would be good to explain the relationship between these more clearly.\n\n1. The authors explicitly mention that their focus in this paper is only on “in-distribution generalization”. I am a bit confused by this given the motivation of simplicity bias and learning features uniformly. To elaborate more on this point,\n - Springer et al [1] also show that SAM implicitly balances the quality of diverse features (similar to the observations made in Section 3 of this paper. The experimental results in [1] is focused more on datasets with multiple predictive features like CelebA, CIFAR-MNIST. \n - Past work on simplicity bias and shortcut learning [2, 3, 4, 5] has focused on similar datasets like CelebA, Waterbirds, CIFAR-MNIST, Colored-MNIST to name a few.\n - While the authors have shown encouraging results on Waterbirds dataset in Appendix D5, it would be good to show the complete results on various groups and on other datasets as well.\n2. Connection to [1]. Springer et al [1] made a very similar observation as to Section 3 in this paper. It would be great if the authors can clarify the differences with the observations in [1] and this work. Particularly, [1] also shows that SAM mitigates simplicity bias and that SAM learns higher quality representations of hard-to-learn features. The authors briefly discuss this in Related Works section but a more detailed answer would be helpful.\n3. I just wanted to understand the practical usefulness of the proposed method. This method has one additional hyperparameter i.e the separating epoch. The authors have reported the best separating epoch for all the datasets which is epoch 8 for CIFAR-10 and epoch 20 for CIFAR-100 (Appendix C.2). How is this hyperparameter chosen? Is there a separating epoch number that works across various datasets? This is especially relevant given that that the average gain on most of the datasets with USEFUL is less than 1% with additional cost for training. \n \n [1] Springer, Jacob Mitchell, Vaishnavh Nagarajan, and Aditi Raghunathan. \"Sharpness-Aware Minimization Enhances Feature Quality via Balanced Learning.\" The Twelfth International Conference on Learning Representations.\n\n[2] Shah, Harshay, et al. \"The pitfalls of simplicity bias in neural networks.\" Advances in Neural Information Processing Systems 33 (2020): 9573-9585.\n \n [3] Geirhos, Robert, et al. \"Shortcut learning in deep neural networks.\" Nature Machine Intelligence 2.11 (2020): 665-673.\n \n [4] Kirichenko, Polina, Pavel Izmailov, and Andrew Gordon Wilson. \"Last layer re-training is sufficient for robustness to spurious correlations.\" arXiv preprint arXiv:2204.02937 (2022).\n \n [5] Teney, Damien, et al. \"Evading the simplicity bias: Training a diverse set of models discovers solutions with superior ood generalization.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\n\n1. I did not find the separating epoch used for most of the datasets (except CIFAR-10 and CIFAR-100). Could you please point me to that?\n2. What is the takeaway from Figure 1? Please mention the observations regarding the Figure involving the fast and slow learnable features. What kind of examples are usually clustered in fast learnable cluster vs slow learnable cluster,\n\nPlease refer to the Weakness section for remaining questions." }, { "confidence": 4, "rating": 4, "review_id": "Qmi8KMhqJg", "review_text": "- Proves for a 2-layer CNN with fixed second layer weighsts trained on a toy dataset, SAM learns slow-learnable and fast-learnable features more uniformly in the early epochs compared to SGD\n- Based on this analysis, proposes a simple clustering-based upsampling strategy for reducing simplicity bias / excessive reliance on fast-learnable features. The results show that this improves in-distribution generalization of standard small-scale image classification tasks.\n\n- Simple easy-to-implement method that uses SAM and upsampling to improve in-distribution generalization\n- The method is well justified with theoretical analysis comparing SAM and SGD on a toy data distribution. This analysis indicates that SAM is less sensitive to simplicity bias.\n\n- No baselines. There are several papers now that try to reduce simplicity bias in order to improve performance:\n - https://arxiv.org/abs/2105.05612\n - https://arxiv.org/abs/2301.13293\n - https://arxiv.org/abs/2107.09044 (does not focus on simplicity bias explicitly, but similar to method proposed in paper)\n - simpler baselines: there are several papers that propose “example difficulty” metrics (https://arxiv.org/abs/2106.09647). How well do this correlate with the clusters found in your method? If you just train on the k examples with the highest difficulty scores (per class), does this fare worse than the proposed method?\n- Limited novelty due to findings in [64] (Sharpness-aware minimization enhances feature quality via balanced learning). This paper also shows that SAM improves feature diversity (on real datasets + backed up with analysis on a toy dataset) and improves performance on transfer-learning tasks.\n- Lacking discussion about when this method would fail. I can imagine two scenarios where the method would not work:\n 1. Most training examples have one or more slow-learnable features. In this case, the clustering approach would “remove” most of the points in the dataset, and train on very points for multiple epochs. This could result in overfitting and performance that is worse than training. There’s an implicit assumption that there is some sort of one-to-one relation between examples and features. In the case where all examples contain an “easy” (e.g. patch) and a “hard” feature (e.g. CIFAR), would this method improve performance over SGD? \n 2. In noisy datasets, low-quality examples or mislabeled examples would require more time to learn, and this method would cluster them and train on them for longer. That is, it would group examples that are “high-quality” and hard-to-learn with “low-quality” points. In this case, would the proposed method improve performance over SGD? \n- “SB of SGD has been long conjectured to be the reason for the superior generalization performance of overparameterized models, by providing capacity control or implicit regularization” This incorrectly cites https://arxiv.org/abs/2006.07710v2, which shows that too much simplicity bias can lead to robustness and in-distribution generalization issues.\n- Unfair evaluation. The experiments compare SAM+TA augmentation and SAM+USEFUL+TA to SGD (no TA). I think there should be two plots, complaring {SGD, SAM, SAM+USEFUL} w/ and w/o TA.\n- Experiments on larger datasets. The image classification used here are fairly small-scale. I would like to how well this method scales to ImageNet-scale datasets (TinyImageNet is not a good proxy..)\n- Writing is repetitive at times, especially the theory section (3.3)\n\n- How specific is the analysis to the toy distribution setting in which there is just 1 slow-learnable and 1 fast-learnable feature? How do things change if the number of slow-learnable features >> number of fast-learnable features?\n- Why is max alignment between weight vector and ground-truth feature v_e the right way to evaluate the feature’s contribution to the model? Isn’t it hypothetically possible that SAM solutions rely more on the simpler feature if more weight vectors rely on v_e instead of v_d, even when max-alignment with slow feature is higher for SAM? Some discussion connecting this metric to “feature reliance” vis-a-vis model outputs would be great." }, { "confidence": 3, "rating": 4, "review_id": "s4pdozXMf4", "review_text": "This paper proposes an algorithm for changing the distribution of training data to improve the generalization of the model on origin data distribution. The paper is inspired by Sharpness Aware Minimization, which aims at finding a flat minimum meaning that it has a good generalization capability. This paper divides features into two categories: fast-learnable features and slow-learnable features and derives some observations like \"SGD and SAM only learn fast-learnable or easy features early in training\" and \"SAM learns slow-learnable and fast-learnable features at a more uniform speed\". The authors propose the method dubbed as USEFUL to train the model on some slow-learnable features repeatedly. The experiments show the effectiveness of USEFUL on CIFAR10 and CIFAR100 datasets.\n\n- The paper is well-written and easy to follow.\n- The paper has a theoretical analysis to analyze the learning progress and derive the proposed method.\n- The experiments are abundant and comprehensive.\n\nThere are some questions based on the presentation of this paper, I will not hesitate to improve my score if the following question are solved.\n- Difference between this paper and methods for long-tailed data distribution or measuring the difficulty of learning examples. Algorithms for long-tailed data distribution are usually based on resampling training data or reweighing loss value. The proposed USEFUL is similar to the resampling methods except that USEFUL focuses on the features that are hard/slow to learn. Some references for understanding: [Shi, Jiang-Xin, et al. \"How re-sampling helps for long-tail learning?.\" Advances in Neural Information Processing Systems 36 (2023).](https://arxiv.org/pdf/2310.18236), [Shrivastava, Abhinav, Abhinav Gupta, and Ross Girshick. \"Training region-based object detectors with online hard example mining.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.](https://arxiv.org/pdf/1604.03540v1) and some references based on it, [A Re-Balancing Strategy for Class-Imbalanced Classification Based on Instance Difficulty](https://openaccess.thecvf.com/content/CVPR2022/papers/Yu_A_Re-Balancing_Strategy_for_Class-Imbalanced_Classification_Based_on_Instance_Difficulty_CVPR_2022_paper.pdf), [Active Teacher for Semi-Supervised Object Detection](https://openaccess.thecvf.com/content/CVPR2022/papers/Mi_Active_Teacher_for_Semi-Supervised_Object_Detection_CVPR_2022_paper.pdf), I believe a discussion of these references in paper should be helpful.\n- The relation between the proposed USEFUL and SAM? It seems like the motivation of USEFUL is changing the data distribution to get a flat minimum like SAM. But the results in Appendix D.2, *i.e.*, 53.8 for SGD 41.8 for SGD+USEFUL 12.4 for SAM in Table 1($\\lambda_{max})$, do not show effectiveness compared with SAM. It could show the effectiveness on SGD but it's far from being comparable to SAM. \nSome small questions:\n- What's the exact formulation of the Data distribution?\n- What's the \"patch\" meaning in Definition 3.1? Is that the same as the patch in ViT or the channel of the image? It's a little confusing.\n- The experiments mainly focus on traditional architecture, e.g., n-Layer CNN, ResNet. More experiments on popular models and big datasets, e.g., Transformer ImageNet-1k, would be better.\n\nSee Weakness." } ]
yxjWAJzUyV
REBEL: Reinforcement Learning via Regressing Relative Rewards
While originally developed for continuous control problems, Proximal Policy Optimization (PPO) has emerged as the work-horse of a variety of reinforcement learning (RL) applications, including the fine-tuning of generative models. Unfortunately, PPO requires multiple heuristics to enable stable convergence (e.g. value networks, clipping), and is notorious for its sensitivity to the precise implementation of these components. In response, we take a step back and ask what a *minimalist* RL algorithm for the era of generative models would look like. We propose REBEL, an algorithm that cleanly reduces the problem of policy optimization to regressing the *relative reward* between two completions to a prompt in terms of the policy, enabling strikingly lightweight implementation. In theory, we prove that fundamental RL algorithms like Natural Policy Gradient can be seen as variants of REBEL, which allows us to match the strongest known theoretical guarantees in terms of convergence and sample complexity in the RL literature. REBEL can also cleanly incorporate offline data and be extended to handle the intransitive preferences we frequently see in practice. Empirically, we find that REBEL provides a unified approach to language modeling and image generation with stronger or similar performance as PPO and DPO, all while being simpler to implement and more computationally efficient than PPO. When fine-tuning Llama-3-8B-Instruct, REBEL achieves strong performance in AlpacaEval 2.0, MT-Bench, and Open LLM Leaderboard. Implementation of REBEL can be found at <https://github.com/ZhaolinGao/REBEL>, and models trained by REBEL can be found at <https://huggingface.co/Cornell-AGI>.
https://openreview.net/pdf/593bf12fbcf5e521841f522017b13aceee20e6e5.pdf
[ { "confidence": 3, "rating": 7, "review_id": "m7UbyEuYlg", "review_text": "This paper reduces the complex policy optimization procedure of alignment to a simple regression objective, using the relation between optimal policy and reward. The paper conduct detailed theoretical analysis in revealing the relation between the proposed algorithm *REBEL* and *NPG/MD*. Comprehensive experiments in both text and image generation exhibit the effectiveness of *REBEL*.\n\n1. This paper studies simplified version of policy optimization in RLHF (compared to PPO), which is a research topic of interest.\n2. The theoretical analysis of *REBEL* is detailed and insightful.\n3. The presentation of this paper is logically clear and has good readability.\n4. The experiments in this paper are comprehensive, and the experimental results are well presented.\n\n1. The statement \"REBEL ... be extended to handle intransistive preferences ....\" in the abstract is not adequately presented in the main content of the paper. As the major influence brought by intransistive preferences is the degradation of reward score accuracy, which is not addressed by this paper.\n2. I would suggest the authors to summarize the limitations of the proposed method in a separate \"Limitations\" section.\n\nnone" }, { "confidence": 4, "rating": 5, "review_id": "I2NT1MRXOr", "review_text": "This paper proposes the REBEL algorithm that reduces policy optimization to iteratively solving squared loss regression problems on the difference in rewards between trajectories, based on DPO's analysis. The paper transforms the resulting equation for r(x, y) presented in DPO to a regression loss function, and avoids the intractable calculation of Z(x) by calculating the loss based on a pair of samples from the same input prompt x, i.e., (x, y) and (x, y'). One of the goals for REBEL is to serve as a simple and lightweight RL algorithm that eliminates the need for complex components like value functions and clipping heuristics used in PPO. The authors provide a theoretical analysis showing that Natural Policy Gradient can be seen as a special case of REBEL under some assumptions. The authors conduct two kinds of empirical analysis including language modeling and image generation tasks to demonstrate the performance of REBEL.\n\n- Originality:\n - This paper presents a new angle by transforming the analysis of the reward function presented in the DPO paper into a reward regression loss, leading to the proposed REBEL algorithm. \n - The authors make connections between REBEL and existing RL methods like NPG considering some assumptions, showing that these algorithms can be seen as special cases or approximations of REBEL under certain conditions. \n\n- Quality:\n - The paper provides a thorough theoretical analysis comparing REBEL with existing RL approaches. \n\n- Clarity: \n - The paper is well-written and easy to understand, with a clear logical flow from motivation to theoretical analysis to empirical validation. The authors do an good job of explaining the intuition behind REBEL and highlighting its connections to prior work.\n\n- Significance:\n - The paper tackles the important problem of developing simpler and more efficient RL algorithms that can scale to large-scale generative model fine-tuning.\n\n1. Insufficient experimental validation and limited baseline comparisons:\n- While the paper presents empirical results on language modeling and image generation tasks, the experimental validation of REBEL could be more comprehensive. The authors should consider including a wider range of benchmarks and datasets to demonstrate the generality and robustness of their approach.\n- The comparison with baseline algorithms like PPO and DPO is somewhat limited. The authors should provide more details on the hyperparameter settings and training procedures for the baselines to ensure a fair comparison. Moreover, the poor performance of DPO compared to PPO in the experiments raises questions about the implementation or hyperparameter choices.\n- The authors claim that REBEL matches the strongest known theoretical guarantees in terms of convergence and sample complexity. However, the experiments only compare performance at a specific epoch without demonstrating improved sample efficiency. Convergence plots showing the performance of REBEL and baselines over the course of training would provide a clearer picture of the sample efficiency and convergence properties.\n\n2. Lack of support for certain claims and limited exploration of key aspects:\n- The paper makes several claims regarding the advantages of REBEL, such as its ability to handle intransitive preferences, incorporate offline datasets, and apply to deterministic MDPs. However, there is a lack of corresponding experimental evidence or theoretical analysis to substantiate these claims.\n- The relationship between the regressor's performance and the quality of the dataset used for training is not explored in depth. Insights or experiments that investigate how dataset quality and diversity affect the regressor's ability to capture an improved policy would strengthen the paper.\n- The choice of base distribution \\mu is mentioned as a determining factor for whether REBEL is hybrid or fully online. However, the paper does not provide experimental results comparing different forms of \\mu across various tasks or practical guidelines for choosing \\mu in real-world applications.\n\n3. Inconsistencies and potential conflicts with previous statements:\n- The authors mention that critic-based variance reduction might be necessary for high-variance trajectory-level rewards in stochastic MDPs, which seems to contradict the criticism of PPO's complexity in the introductory section. The lack of experimental support for REBEL's performance in stochastic MDPs is a significant limitation, and the authors should provide preliminary results or theoretical insights to support their claims.\n\n1. Sample efficiency and convergence guarantees:\n- The authors claim that REBEL matches the strongest known theoretical guarantees in terms of convergence and sample complexity. However, the experiments only compare performance at a specific epoch without demonstrating improved sample efficiency. Can the authors provide experimental results that support their claim of improved sample efficiency compared to other algorithms?\n- It would be helpful to see convergence plots that show the performance of REBEL and baseline algorithms over the course of training, rather than just at a selected epoch. This would provide a clearer picture of the sample efficiency and convergence properties of REBEL.\n2. Relationship between regressor performance and dataset quality:\n- The authors state that a regressor that can predict the difference in rewards between trajectories implicitly captures an improved policy. Is the performance of this regressor dependent on the quality of the dataset used for training? How does the quality of the dataset affect the regressor's ability to capture an improved policy?\n- Can the authors provide insights or experiments that explore the relationship between dataset quality and the effectiveness of REBEL?\n3. Applicability to deterministic MDPs:\n- The authors mention that REBEL can be applied to any deterministic MDP where the initial state is x and the trajectory y consists of a sequence of actions. Is there any experimental or theoretical support for this claim?\n- It would strengthen the paper if the authors could provide empirical results or theoretical analysis that demonstrates the effectiveness of REBEL in deterministic MDPs beyond the bandit formulation.\n4. Choice of base distribution \\mu:\n- The authors state that the choice of base distribution \\mu determines whether REBEL is hybrid or fully online. Can they provide experimental results that compare different forms of \\mu across various types of tasks? What are the practical guidelines for choosing \\mu in real-world applications?\n- Insights into the impact of different choices of \\mu on the performance and behavior of REBEL would be valuable for practitioners looking to apply this algorithm.\n5. Stochastic MDPs and the need for critic-based variance reduction:\n- The authors leave the experimental validation of REBEL in stochastic MDPs for future work but mention that trajectory-level rewards can be high-variance, potentially requiring critic-based variance reduction. In what practical situations would the transition dynamics be stochastic? If critic-based variance reduction is needed, how does this align with the introductory section's criticism of PPO's complexity?\n- The lack of experimental support for REBEL's performance in stochastic MDPs is a significant limitation. Can the authors provide any preliminary results or theoretical insights that support their claims about REBEL's applicability to stochastic environments?\n6. Performance comparison with baselines:\n- In the experiments conducted by the authors, DPO performs significantly worse than PPO, especially in Table 1, where DPO is inferior in every case. Can the authors provide an explanation for this discrepancy? Is it due to differences in implementation or hyperparameter settings?\n- In Figure 3, the comparison between PPO and REBEL is made at an intermediate checkpoint where REBEL observes a higher reward under the reward model. Is it possible that PPO has already overfit at this selected epoch? How was this specific epoch number chosen for REBEL? What would the comparison look like if the best-performing epoch for each algorithm were considered? Additionally, why is the comparison limited to only PPO? It would be informative to include other state-of-the-art RL algorithms in the comparison to better understand the relative performance of REBEL." }, { "confidence": 3, "rating": 7, "review_id": "4Em20XXbwf", "review_text": "This work presents REBEL, a minimalist reinforcement learning algorithm that does policy optimization by solving a sequence of regression problems using relative rewards as targets. Theoretical analysis shows that Natural Policy Gradient (NPG) is a variant of REBEL, and thus theoretical guarantees for NPG can be applied to REBEL. Experimental results show that REBEL matches or outperforms existing baselines, most notably PPO and RLOO, on multiple tasks.\n\n- The paper is well-organized and technically sound. The general flow of the paper is smooth and proposed methods are explained adequately. The paper has an appropriate number of citations and properly details existing work in the related work section. \n- The method is simple to implement and has little engineering overhead. Given the minimalist implementation, the results are impressive, surpassing even PPO, which typically requires significant engineering.\n\n- There are no significant weaknesses in this work, barring some clarifying details. \n- I believe that at least a brief section on related work should be included in the main paper, the in-depth one can be deferred to the appendix. In terms of space, I personally do not think Section 2.2 adds much value to the main paper.\n\n- The reward model becomes increasingly off-distribution as the policy is updated. Although it is standard practice to keep reward models fixed even with iterative methods, prior works generally use it to generate preference labels between pairs of outputs. Since this work uses the difference of scores as the regression target, the off-distribution reward scores might have a greater impact here. Concisely, how significant a problem is reward model over-optimization [1] for REBEL?\n- It would be interesting to see and understand the differences between reward-weighted regression baseline (RWR) and REBEL as they have some close connections. \n- Is there an optimal choice of $\\mu$ ? What are the intuitive differences between using the $\\mu= \\pi_{ref}$ and $\\mu= \\pi_{t}$ ? As the policy improves, samples $y,y’ \\sim \\pi_{t}$ are in the high reward region, and it can be difficult to separate them since these might be off-distribution for the reward model. Given these constraints of the reward model, there might be better choices of $\\mu$ that allow for better prediction of score differences. It would be interesting to see an ablation study on this, or a well-reasoned answer that explains the tradeoffs between different choices of $\\mu$. \n- Why are datasets not aggregated? Instead, only the most recently collected dataset is used for training. \n\n[1] : Gao, L., Schulman, J., & Hilton, J. (2023, July). Scaling laws for reward model overoptimization. In International Conference on Machine Learning (pp. 10835-10866). PMLR." }, { "confidence": 3, "rating": 8, "review_id": "g1x1J8JQyO", "review_text": "The authors present REBEL, a method for solving contextual bandit problems (such as the alignment of language models) via regressing relative rewards. They first derive their objective by demonstrating that the use of paired responses means that you can get rid of the partition function, which is impossible to estimate. \n\nThey then connect their method to previous methods in RL including detailing, but not . They demonstrate that under strong assumptions REBEL is equivalent to mirror descent, and that under assumptions of coverage by the reference policy, that REBEL produces returns close to an optimal policy. \n\nFinally the authors run experiments on summarisation, general chat and image alignment, demonstrating their method compares favourably to other methods.\n\n* The idea of using relative rewards to remove the partition function is a nice and simple idea\n* The theoretical connections of their method to prior methods grounds their work nicely in existing RL approaches. \n* The empirical results seem to demonstrate their method is competitive or better than other approaches. \n* REBEL compares favourably in terms of runtime and memory usage with other, similarly performing methods. \n\nOverall the theoretical and empirical examinations of their method seems very thorough.\n\nSee questions\n\n* Do the authors have any idea why REBEL seems to have a slightly higher KL than the other methods?\n* Although in image alignment REBEL seems to do similarly to PPO, it also has higher variance. Do you know why that might be?\n* Are the results for the 6.8B model significant? It seems as though REBEL produces very similar performance to e.g. PPO. For the smaller models the separation seems larger, is there a reason why the separation in performance between REBEL and other methods is bigger for smaller models?\n* What are the error bars in Table 1? Is that standard deviation?" } ]
yxOrSmS5wR
AV-Cloud: Spatial Audio Rendering Through Audio-Visual Cloud Splatting
We propose a novel approach for rendering high-quality spatial audio for 3D scenes that is in synchrony with the visual stream but does not rely or explicitly conditioned on the visual rendering. We demonstrate that such an approach enables the experience of immersive virtual tourism - performing a real-time dynamic navigation within the scene, experiencing both audio and visual content. Current audio-visual rendering approaches typically rely on visual cues, such as images, and thus visual artifacts could cause inconsistency in the audio quality. Furthermore, when such approaches are incorporated with visual rendering, audio generation at each viewpoint occurs after the rendering of the image of the viewpoint and thus could lead to audio lag that affects the integration of audio and visual streams. Our proposed approach, AV-Cloud, overcomes these challenges by learning the representation of the audio-visual scene based on a set of sparse AV anchor points, that constitute the Audio-Visual Cloud, and are derived from the camera calibration. The Audio-Visual Cloud serves as an audio-visual representation from which the generation of spatial audio for arbitrary listener location can be generated. In particular, we propose a novel module Audio-Visual Cloud Splatting which decodes AV anchor points into a spatial audio transfer function for the arbitrary viewpoint of the target listener. This function, applied through the Spatial Audio Render Head module, transforms monaural input into viewpoint-specific spatial audio. As a result, AV-Cloud efficiently renders the spatial audio aligned with any visual viewpoint and eliminates the need for pre-rendered images. We show that AV-Cloud surpasses current state-of-the-art accuracy on audio reconstruction, perceptive quality, and acoustic effects on two real-world datasets. AV-Cloud also outperforms previous methods when tested on scenes "in the wild".
https://openreview.net/pdf/ddf8493e6466b57d0cba1ab7e5d9b9b4ab8c6d6d.pdf
[ { "confidence": 3, "rating": 6, "review_id": "CzCXW7QJVE", "review_text": "The paper proposes AV-Cloud, a framework for high-quality spatial audio rendering in 3D scenes without relying on visual cues. AV-Cloud addresses issues in current audio-visual rendering methods, such as audio lag and dependence on visual rendering quality, by introducing Audio-Visual Anchors and the Audio-Visual Cloud Splatting module. These components facilitate the generation of viewpoint-specific spatial audio synchronized with visual content. The method demonstrates superior performance on multiple benchmarks, outperforming existing baselines in audio reconstruction accuracy, perceptual quality, and acoustic effects.\n\n1. The concept of using Audio-Visual Anchors and Cloud Splatting to decouple audio rendering from visual rendering is interesting.\n2. The paper demonstrates comprehensive experimentation and robust evaluation across multiple benchmarks.\n3. The paper is well-structured and the presentation of the framework is clear. The figures and supplement examples help the readers better understand.\n4. The proposed method addresses critical issues in real-time audio-visual rendering.\n\n1. The mathematical formulation of the Audio-Visual Cloud Splatting module could be more detailed. For instance, Equation (2) introduces the softmax function applied to the relative vectors and visual features, but the reason behind this specific formulation and its implications are not sufficiently explained. Clarifying how the weights $a_{ki}$ are computed and how they influence the final output would enhance understanding.\n2. The technical derivation of the Spatial Audio Render Head (SARH) lacks depth. Specifically, the process described in Equations (4) and (5), where the mixture mask $m_m$ and the difference mask $m_d$ are used to compute the left and right channel outputs, is not fully elaborated. The significance of these masks and their impact on the final audio quality are not clearly discussed. Additionally, the role and impact of the convolution modules within the residual structure (Figure 3) are not sufficiently explained.\n3. While the method shows strong performance on benchmarks and some real-world examples, the provided examples are too idealized and lack challenging elements like interfering sound (e.g., crowd noise). I think the robustness of AV-Cloud in more complex and noisy real-world environments should also be validated.\n\nSee Weaknesses." }, { "confidence": 1, "rating": 7, "review_id": "JYc1vAePwV", "review_text": "A novel approach for rendering high-quality spatial audio in 3D scenes, called AV-Cloud, is proposed. This method synchronizes with the visual stream without relying on or being explicitly conditioned by visual rendering, enabling immersive virtual tourism through real-time dynamic navigation of both audio and visual content. Unlike current audio-visual rendering methods that depend on visual cues and may suffer from visual artifacts causing audio inconsistencies, AV-Cloud overcomes these issues. It uses a set of sparse AV anchor points, forming an Audio-Visual Cloud derived from camera calibration, to represent the audio-visual scene. The Audio-Visual Cloud allows for the generation of spatial audio for any listener location. A novel module, Audio-Visual Cloud Splatting, decodes these AV anchor points into a spatial audio transfer function for the listener’s viewpoint, which is then applied by the Spatial Audio Render Head module to transform monaural input into viewpoint-specific spatial audio. This approach eliminates the need for pre-rendered images and efficiently aligns spatial audio with any visual viewpoint. The results are satisfying.\n\n1. The AV anchors strategy seems to be interesting and effective for audio-visual scene representation. The Audio-Visual Cloud Splatting is novel for AV tasks but more likely to be a Q-former.\n2. The experiment results are good and ablations are clear.\n\nAs I mentioned in the strengths, the Audio-Visual Cloud Splatting seems to be a Q-former like module.\n\nWhat is the difference between the AVCS and Q-former?" }, { "confidence": 4, "rating": 5, "review_id": "LNVoL4pp0P", "review_text": "The paper explores the problem of generating 3D audiovisual scenes – that is, generating 3D scenes with spatial audio. The proposed approach, AV Cloud, uses anchor points obtained from Structure-from-Motion (SfM) points. The anchors are then used with an AV Cloud splatting module which decodes the visuals and the audio. Experiments are done on RWAVS and Replay-NVAS with comparisons done with several prior works.\n\n– 3d audiovisual scene generation is a really interesting problem to solve. WHile there is considerable literature on visual scene generation, generating 3d visual scene is an interesting problem with real-world applications. \n\n– The model claims to be able to generate the audio and the visuals in parallel. Essentially unlike prior work it decouples the generation of two modalities by not using the generated visuals for generating the audio. \n\n– On objective metrics, the paper claims to make good improvements\n\n---- \nincreased score after rebuttal\n\n– The paper is a bit difficult to follow – especially the key part of AudioVisual anchor points. \n\n– First, a short primer on SfM is desirable, even if it is in Appendix. More importantly though, it is not clear why it makes sense to use SfM points and clustering on top of them to model AV anchor points and generation of spatial points. Why does it make sense to use SfM points or anchors derived from them as the starting point for AV generation ? What relation the anchors have with audio which motivates the fact that these anchors can be used for audio generation ? \n\n– Second, the details of AV anchor points are fuzzy. The visuals are used for SfM points which are then clustered to get the anchors. Where is the audio into picture here ? Are these anchors visual only ? If so, why are we calling it AV Anchors ? \n\n– In prior works, for example AV-Nerf, there is an an explicit AV-Mapper which learns the audio visual relations through which the spatial audio generatio happens. Here Visual2Audio splatting transformer is expected to model that ? \n\n– For the subjective tests, it would be good to actually get proper subjective ratings on the generated spatial audio. The current preference numbers are not very informative. Getting the spatial audio rated with respect to their quality and spatial characteristics would be much more meaningful. \n\n– Since NAF, INRAS and other works are considered here - I think it would be good to reference NACF ([R1]) below. NACF specifically focuses on using visuals and is ideal for comparison. \n\n[R1] Neural Acoustic Context Field: Rendering Realistic Room Impulse Response With Neural Fields\n\nPlease address the questions below." } ]
ywEQkCmImh
Towards Multi-Domain Learning for Generalizable Video Anomaly Detection
Most of the existing Video Anomaly Detection (VAD) studies have been conducted within single-domain learning, where training and evaluation are performed on a single dataset. However, the criteria for abnormal events differ across VAD datasets, making it problematic to apply a single-domain model to other domains. In this paper, we propose a new task called Multi-Domain learning forVAD (MDVAD) to explore various real-world abnormal events using multiple datasets for a general model. MDVAD involves training on datasets from multiple domains simultaneously, and we experimentally observe that Abnormal Conflicts between domains hinder learning and generalization. The task aims to address two key objectives: (i) better distinguishing between general normal and abnormal events across multiple domains, and (ii) being aware of ambiguous abnormal conflicts. This paper is the first to tackle abnormal conflict issue and introduces a new benchmark, baselines, and evaluation protocols for MDVAD. As baselines, we propose a framework with Null(Angular)-Multiple Instance Learning and an Abnormal Conflict classifier. Through experiments on a MDVAD benchmark composed of six VAD datasets and using four different evaluation protocols, we reveal abnormal conflicts and demonstrate that the proposed baseline effectively handles these conflicts, showing robustness and adaptability across multiple domains.
https://openreview.net/pdf/e8a9752b43978f5dd7a7f88fc83f109cdef34692.pdf
[ { "confidence": 4, "rating": 5, "review_id": "Lw3aonvTQw", "review_text": "This work proposes a new task named Multi-Domain Learning Video Anomaly Detection, which aims to learn a general VAD model across domains. The work finds that abnormal conflict is a critical challenge in the task. Then, the work establishes a new benchmark, designs an effective baseline and conducts extensive experiments to investigate this challenge. The results shown on the benchmark demonstrate that the abnormal conflict is alleviated.\n\n1. The work proposes a new task, which is interesting. \n2. The work establishes a new benchmark to evaluate the new task. \n3. The motivation of the proposed baseline, i.e., abnormal conflict, is clear and makes sense.\n\nI have some concerns about the proposed method, and I think more comparison experiments are needed to demonstrate the effectiveness. Despite this, I think the abnormal conflict issue is interesting, thus I am willing to raise my rating if my major concerns are addressed. My concerns are as follows:\n\n1. Why the proposed Abnormal Conflict (AC) classifer can address the abnormal conflict problem? Why the label is determined by the discrepancy in Eq. (6)? It seems that there are some mistakes in the formula (inconsistent with that in Fig. 2). \n2. I would like to see the results of more baselines, in addition to MIL, Null-MIL and NullAng-MIL. \n3. More detailed discussions about related works are needed, e.g., virual video anomaly detection datasets [1] and related techniques utilizing virtual datasets [2]. \n\n[1] Ubnormal: New benchmark for supervised open-set video anomaly detection, CVPR 2022\n\n[2] Generating Anomalies for Video Anomaly Detection with Prompt-based Feature Mapping, CVPR 2023\n\nSee the Weakness part." }, { "confidence": 5, "rating": 5, "review_id": "XslE8Tdz6x", "review_text": "In this paper, authors proposed a new task called Multiple Domain VAD (MDVAD), along with a benchmark and new evaluation protocols. Authors' goal is to construct a general VAD model by conducting multi-domain learning while recognizing abnormal conflicts and exploring representations of general normality and abnormality. Authors introduced a baseline for MDVAD and proposed a new framework with multi-head to mitigate abnormal conflicts and proposed Null-Multiple Instance Learning (Null-MIL) and NullAngular-MIL (NullAng-MIL) losses for multi-domain training. Additionally authors suggested an Abnormal Conflict (AC) Classifier to explore general features while being aware of abnormal conflicts. Authors analyzed the primary issues of MDVAD and proposed a baseline for this new task.\n\n1. According to the analysis, authors believed that the abnormal conflict and the scene discrepancy are the two main issues and designed a framework with multi-head to deal with these problems. \n\n2. Null-MIL and NullAng-MIL methods are designed for multi-domain learning, and an AC classifier is proposed for learning general features while abnormal conflicts exists.\n\n3. Authors provided sufficient experiment results for this task and create a new baseline.\n\n1. The proposed framework with multi-head for multi-domain seems not flexible enough during the domain changes, such as adding a new dataset with extra abnormal conflicts. And for the abnormal conflicts, will the proposed method performs better compared to make anomaly categories classifications for all anomaly events type of all domains?\n2. In my opinion, traditional WS-VAD methods are designed to detect abnormal events in single domain without abnormal conflicts, and when abnormal conflicts exists, it will be better to use other paradigms such as temporal action localization or video grounding. And for the current WS-VAD datasets, the annotations are video-level, or even without category information, which is too weak for higher level anomaly detection. Training model with the current MDVAD paradigm is likely to not achieve good results.\n3. Maybe using visual-language model with multimodal alignment can deal with the above issues? These models contain more knowledges for more event categories and higher generalization ability, which are likely to have the ability to individually detect conflicting anomalies. Compared to multi-head regression, is VL alignment a better approach for MDVAD task?\n\nMy main questions are shown in the weakness." }, { "confidence": 4, "rating": 5, "review_id": "jQCqVJbfu2", "review_text": "The manuscript addresses the limitations of existing Video Anomaly Detection (VAD) models that are confined to single-domain learning. The primary contribution of the paper is the introduction of a new task called Multi-Domain Learning for VAD (MDVAD), which aims to develop a general model capable of identifying abnormal events across multiple domains. The manuscript conducts experiments using the MDVAD benchmark and demonstrates the limitations of traditional multi-domain learning. It shows the effectiveness of the proposed baselines in handling abnormal conflicts and achieving robust performance across multiple domains.\n\n1.The manuscript proposes a new task, Multiple Domain Video Anomaly Detection (MDVAD), which solves the problem that the existing model is limited to a single domain and provides a new idea for the development of domain-generalized models.\n2.The MDVAD method proposes domain-specific multiple head mechanism and Null-Multiple Instance Learning Method (Null-MIL), which effectively solves the problem of anomaly conflict between different domains.\n3. The MDVAD method constructs a new benchmark containing six representative VAD datasets, which fills the gap of the lack of unified evaluation standard in multi-domain learning tasks.\n4. The MDVAD method designs four evaluation protocols (held-in, leave-one-out, low-shot domain adaptation, and full fine-tuning) to systematically evaluate the generalization ability of the model.\n\n1. MDVAD introduces the domain-specific multi-head mechanism and the Null-MIL method, which increases the complexity and computational cost of the model, and may place higher demands on the computational resources in practical applications.\n2. The multi-domain learning task itself is difficult to train, and with the proposed method further increasing the complexity of training, MDVAD may require longer training time and higher technical requirements.\n3. Although the theoretical background and analysis are provided, the theoretical basis and derivation process of some of the methods of MDVAD are slightly weak and need to be further explored and verified in depth. Part of the theoretical analysis is based on specific assumptions, and these assumptions may not be fully valid in practical applications, affecting the applicability of the theoretical analysis.\n4. Although new benchmarks and assessment protocols are proposed, MDVAD lacks comparative experiments with other state-of-the-art methods, making it difficult to objectively assess the relative advantages of the proposed methods.\n\n1. What is the training difficulty of MDVAD? The introduction of domain-specific multi-head mechanism and Null-MIL method greatly increases the complexity and computational cost of the model, can it meet the real-time requirements in practical applications?\n2. Have the MDVAD and evaluation protocols been subjected to comparative experiments with other state-of-the-art methods in order to objectively assess the relative advantages of the proposed methods?" }, { "confidence": 4, "rating": 4, "review_id": "JlL1NjZd91", "review_text": "This paper proposes a new task called MDVAD, the goal of which is to effectively learn from multiple domains with different data distributions and definitions of abnormality without confusion, resulting in a general VAD model. To achieve this, the authors expand the traditional single-head framework to multiple-head framework for learning different knowledge and design an AC classifier to handle abnormal conflicts. The experimental results prove the effectiveness of the proposed method.\n\n1. This paper focuses on the problem of learning a generalizable VAD model, which is an important task.\n2. The experiments conducted by the author are relatively compr\n\n1. This paper proposes a new task called MDVAD to achieve generalizable VAD by resolving conflicts in anomaly definitions. However, for any VAD application, the definition of normal or abnormal events should be explicitly determined according to the scenario requirements, rather than simply combining multiple datasets and resolving the abnormal conflicts. I find it difficult to understand under what practical scenario a VAD model trained using multiple datasets with abnormal conflicts is needed.\n2. The writing of this paper is not clear enough, where some necessary training and inference details are missed. For example, the normal head training mentioned in NullAng-MIL is confusing.\n3. This paper lacks a detailed description of the experimental setup. For example, if an anomalous event is determined to be a conflict, how should the model handle such an event?\n\n1. (referring to weakness 1) In practical applications, what kind of scenarios conform to the task settings of MDVAD proposed by the authors?\n2. (referring to weakness 3) During test phase, is it necessary to know which dataset the sample comes from? If a certain anomalous event is found to have conflicts in different datasets, how should it be handled? Do the multi-dataset evaluation and the test procedures for other methods use the same test data?" } ]
yvUHnBkCzd
Personalized Federated Learning with Mixture of Models for Adaptive Prediction and Model Fine-Tuning
Federated learning is renowned for its efficacy in distributed model training, ensuring that users, called clients, retain data privacy by not disclosing their data to the central server that orchestrates collaborations. Most previous work on federated learning assumes that clients possess static batches of training data. However, clients may also need to make real-time predictions on streaming data in non-stationary environments. In such dynamic environments, employing pre-trained models may be inefficient, as they struggle to adapt to the constantly evolving data streams. To address this challenge, clients can fine-tune models online, leveraging their observed data to enhance performance. Despite the potential benefits of client participation in federated online model fine-tuning, existing analyses have not conclusively demonstrated its superiority over local model fine-tuning. To bridge this gap, the present paper develops a novel personalized federated learning algorithm, wherein each client constructs a personalized model by combining a locally fine-tuned model with multiple federated models learned by the server over time. Theoretical analysis and experiments on real datasets corroborate the effectiveness of this approach for real-time predictions and federated model fine-tuning.
https://openreview.net/pdf/1f67e6c96793fc968860e4ecdc67eeb800a1dc2f.pdf
[ { "confidence": 5, "rating": 4, "review_id": "iV0gcbweRE", "review_text": "This paper introduces a personalized federated learning algorithm to address the challenges of real-time predictions in non-stationary environments. Clients fine-tune models online, combining their locally fine-tuned models with multiple federated models learned over time. This approach ensures efficient adaptation to evolving data streams, with theoretical analysis and experiments on real datasets demonstrating its effectiveness.\n\n- The proposed algorithm effectively addresses the challenge of making real-time predictions in non-stationary environments by allowing clients to fine-tune models online, ensuring continuous adaptation to evolving data streams.\n- By combining locally fine-tuned models with multiple federated models, the approach enhances personalization and leverages the strengths of both local and federated learning, resulting in improved performance.\n- The paper provides a solid theoretical analysis alongside experimental validation on real datasets, demonstrating the practical effectiveness and robustness of the proposed algorithm in real-world scenarios.\n\n1. Contributions are suggested to list by items for clear summaries.\n2. The baselines in Table 1 are all before the 2022 year, more latest related methods published in 2023 should be compared.\n3. Fed-POE has limited improvements on Air and FMNIST datasets.\n4. The process of combining locally fine-tuned models with multiple federated models may introduce significant computational overhead for clients, especially those with limited resources.\n5. As the number of clients increases, managing and integrating multiple personalized models can become complex, posing scalability challenges for the proposed algorithm.\n\nNo" }, { "confidence": 4, "rating": 5, "review_id": "TnmL2ZwCZG", "review_text": "This paper proposes a novel personalized federated learning algorithm, Fed-POE, which is designed for adaptive prediction and model fine-tuning in dynamic environments. It addresses the challenge of real-time predictions on streaming data by constructing a personalized model that combines a locally fine-tuned model with multiple federated models. Theoretical analysis and experiments on real datasets demonstrate its effectiveness in achieving sublinear regret bounds and improved online prediction accuracy.\n\n1. The paper proposes a unique ensemble method that dynamically combines local and federated models, which is a novel approach in the field of federated learning.\n\n2. It provides a solid theoretical analysis, demonstrating sublinear regret bounds for convex models.\n\n3. The paper is well organized.\n\n1. Although the presented method is novel, it is simply a combination of the previous personalized federated learning approaches as well as ensemble learning and provides comparably little conceptual originality.  The contribution's main novelty seems to be that integrating results from prior models would be beneficial in mitigating catastrophic forgetting in online federated learning.\n\n2. Experimental results show that the improvement in the accuracy of Fed-POE compared to other methods is not significant, but ensemble learning inevitably increases the computational overhead increase. The paper needs to analyze whether this trade-off is reasonable.\n\n3. The paper needs more experiments to prove the effectiveness of the method, for example, for real-time predictions, the size of the old data replay is crucial, and the authors should design experiments to analyze the effect of batch size b on the experimental results. This paper also needs experiment results on the accuracy over the time step.\n\n1. The method in this paper does not significantly improve accuracy and even has a larger standard deviation. Can you give me more reasons to support your methods?\n\n2. The method is designed with two parts to mitigate catastrophic forgetting (old data replay and integration of multiple old models), The complex model updating process is unreasonable for real-time prediction. Can you design more ablation experiments to analyze these two parts?" }, { "confidence": 4, "rating": 6, "review_id": "X6ubCodu8V", "review_text": "The paper introduces an interesting perspective about the role of ensembles of models in federated learning. The provocative claim is the fact that federated learning is not always better than locally-trained models. This is contextualized in the field of not IID data and time-varying data generating processes. To address this issue the paper introduces from the theoretical point of view how to quantify the regret in federated and locally trained models. In addition it includes an analysis of non-convex models by managing an history of models. The overall impression about the paper is positive even if some points could have been better explored (in particular the part related to not IID data that is somehow the core of the paper).\n\n- The paper introduces a theoretical evaluation of the gain produced by federated models w.r.t. locally trained models. This results show that federated learning is relevant only when models can be considered iid (hence averaging providing better results). This is somehow a know results but I appreciated the theoretical analysis\n- The proposed solution is to combine with a convex mean a locally-trained models with the federated models\n- This is further extended in case of non-convex models by considering an \"history\" of models to be used when needed (i.e., according to the loss)\n\n- The federated models somehow includes the locally-trained model. I would have appreciated a further analysis about the fact that the two \"sides\" of the average model are related each other. \n- The setting in which eta and eta_c scales with T prevents adaptation in the long run (which is somehow the core of the paper). How to deal with that?\n- Federated learning typically takes also into account the complexity of the learning phase (i.e., the amount of info to be transmitted, e.g., the models). This is not quantified here. And this could be also a weak point in the fed-poe algorithm.\n\nSee Weaknesses box" }, { "confidence": 3, "rating": 5, "review_id": "gVs2y7BKsz", "review_text": "This paper introduces Fed-POE, a novel personalized federated learning algorithm tailored for online prediction and model fine-tuning. Fed-POE creates an ensemble by integrating local models with those periodically contributed by the server over time. Theoretical analysis confirms that Fed-POE attains sublinear regret. Empirical results demonstrate that Fed-POE consistently surpasses the performance of both local and federated models across all evaluated datasets, which indicates that Fed-POE effectively leverages the advantages of both local and federated models.\n\n- The technical content of the paper appears to be accurate, although I did not check all the details carefully.\n- This paper is generally well-written and structured clearly.\n- The experiments substantiate the main theoretical analysis, and the proposed algorithm demonstrates superior performance over the baseline methods\n\nMy primary concern is that the assertion the proposed algorithm can effectively harness the combined advantages of federated and local models is not clearly demonstrated within the theoretical bounds. The paper presents two principal theoretical results: Theorem 2 provides the regret upper bound for the proposed algorithm in convex scenarios, while Theorem 3 addresses non-convex cases. Both theorems establish sublinear regret bounds that are consistent with those for federated learning using a straightforward online gradient descent approach. I recommend enhancing the clarity of the proposed method's advantages in the theorems by incorporating assumptions about the data distributions.\n\nSee weaknesses." } ]
yppcLFeZgy
MutaPLM: Protein Language Modeling for Mutation Explanation and Engineering
Studying protein mutations within amino acid sequences holds tremendous significance in life sciences. Protein language models (PLMs) have demonstrated strong capabilities in broad biological applications. However, due to architectural design and lack of supervision, PLMs model mutations implicitly with evolutionary plausibility, which is not satisfactory to serve as explainable and engineerable tools in real-world studies. To address these issues, we present MutaPLM, a unified framework for interpreting and navigating protein mutations with protein language models. MutaPLM introduces a protein *delta* network that captures explicit protein mutation representations within a unified feature space, and a transfer learning pipeline with a chain-of-thought (CoT) strategy to harvest protein mutation knowledge from biomedical texts. We also construct MutaDescribe, the first large-scale protein mutation dataset with rich textual annotations, which provides cross-modal supervision signals. Through comprehensive experiments, we demonstrate that MutaPLM excels at providing human-understandable explanations for mutational effects and prioritizing novel mutations with desirable properties. Our code, model, and data are open-sourced at https://github.com/PharMolix/MutaPLM.
https://openreview.net/pdf/6ba89a23eb0008a9e5fa6007a9fcb9c765216d9f.pdf
[ { "confidence": 5, "rating": 5, "review_id": "n9yelgSQoi", "review_text": "The paper presents MutaPLM, a framework designed to interpret and navigate protein mutations using protein language models. This approach utilizes a protein delta network to capture mutation representations and employs a transfer learning pipeline with a chain-of-thought strategy to leverage knowledge from biomedical texts.\n\n1. This paper attempts to propose a general interpretable model for protein mutations.\n2. This paper compiles a mutation-text multimodal dataset, providing an excellent benchmark for future work.\n3. The code is available. Although I haven't had time to run it yet, I will try to run the code during the rebuttal phase to ensure the reproducibility of the experiments.\n\n1. PLM representations used in this study is the residue-level or protein-level embedding? If the mutation has very few residues, such as a missense mutation, will using protein-level embedding result in h∆ being too small?\n2. Is it possible to provide some more practical mutation-related downstream task benchmark results? For example, predicting changes in protein properties or PPI?\n3. Is it possible to compare the proposed method with the predictive results of embeddings extracted by AF, since the description information of the mutation may already be included in the structural changes predicted by AF before and after the mutation?\n4. I do not deny that this is a good work, but perhaps it is more suitable for the benchmark and dataset track, because its method has limited innovation, and it has not verified its interpretability and performance on actual tasks related to protein properties.\n\nN/A" }, { "confidence": 4, "rating": 6, "review_id": "xzyxWBy0DD", "review_text": "In the paper entitled \"MutaPLM: Protein Language Modeling for Mutation Explanation and Engineering,\" the authors proposed multimodal protein-textual language models for understanding the effect of mutation and performing protein engineering. They also build MutaDescribe, the first large-scale protein mutation dataset with rich textual annotations.\n\n1. The paper is generally well-written and easy to follow.\n2. The authors have constructed the first comprehensive protein mutation dataset enriched with textual annotations. This dataset represents a significant foundation for future research in this field.\n3. The MutaPLM framework introduced in this paper is innovative, particularly in its explicit modeling of mutations and its use of cross-modal transformers for multi-modal feature integration, enhancing its analytical capability.\n4. By integrating large language models, the proposed framework significantly simplifies protein engineering, offering an intuitive tool that could be readily adopted by biologists for advanced research.\n\n1. The paper lacks a comparison with fine-tuned protein language models. Finetuned PLMs (ESM-1, ESM-2) have been validated to be powerful for various downstream tasks. For example, MLAEP(https://www.nature.com/articles/s41467-023-39199-6) and AugmentedESM(https://www.nature.com/articles/s41587-021-01146-5)\n2. The paper did not prove why the textural annotation is necessary. From the ablation study, one can conclude that the labeled information from the textual annotation makes the model powerful. \n3. The paper should add more discussion and experiments on why human-understandable notation is necessary. Human-understandable notations are not more informative compared with a conventional multi-label dataset. Moreover, LLMs may fail to deal with regression tasks, while finetuned PLMs can do better.\n\n1. The statement that \"Protein language models (PLMs) fall short in explaining and engineering protein mutations\" may need reconsideration. 1. Recent studies, such as those involving ESM-1/ESM-IF1, have demonstrated these models' effectiveness in zero-shot engineering tasks. This contradicts the assertion of inherent limitations due to architectural design and lack of supervision. See https://www.nature.com/articles/s41587-023-01763-2 and https://www.science.org/doi/full/10.1126/science.adk8946\n\n2. The manuscript would benefit from a deeper discussion of the MLDE methodology, particularly in the context of fine-tuning pre-trained protein language models like AugmentedESM(https://www.nature.com/articles/s41587-021-01146-5). A comparative analysis between mutaPLM and MLDE-based methods(e.g. AugmentedESM) could provide more clarity on their respective performances.\n\n3. Based on 2, further exploration of the role of textual descriptions in enhancing model performance would be advantageous. Clarification on how these descriptions integrate with the model to improve predictions would be helpful.\n\n4. The performance of the model on regression tasks remains unclear. It would be instructive for the authors to include results or discuss how the model handles quantitative predictions in the context of protein functionalities." }, { "confidence": 4, "rating": 6, "review_id": "sTuAU6TluA", "review_text": "The paper proposes a framework to 1). generate text-based mutation effects for mutated proteins and 2). propose new mutated sequences based on the function descriptions. The main module is an encoder-decoder network, which encodes the representations of mutated sequences and outputs the position and amino acid of the mutation. The network is first pretrained on the protein literatures and then fine-tuned on the mutation effects.\n\n* The problem studied in this paper is novel and well-motivated: generate mutated sequences conditioning on the instructions, and generate mutation effects conditioning on the sequences.\n* The method is technically sound. \n* The paper is well-structured\n\nMost issues are on the evaluation side. Rigorous evaluations are very important for the AI4Science applications. \n* Baseline Selection: The paper employs weak baselines for comparison. None of the baselines used have been specifically trained on mutations. This makes it difficult to accurately assess the true effectiveness of the method.\n* Lack of Temporal Evaluation: While the paper adopts a structural split for evaluation, which is acceptable, a temporal-based evaluation would be more ideal and realistic. A temporal split, where some proteins are held out based on their discovery time, would more accurately reflect real-world scenarios in scientific applications. \n* Weak Evaluation of Mutation Explanations: The use of GPT-4 to assess scientific explanations is not robust or scientifically sound.\n* Missing experimental details. The paper omits several crucial experimental details, which harms reproducibility and thorough understanding of the methodology. Specific areas lacking detail include:\n 1. explain in details how you tune the hyperparameters\n 2. what is the dataset for protein literatures?\n 3. When construct MutaDescribe, did you only use swissprot or the whole dataset? how did you extract the mutation explanations? How do you know whether it's expert-reviewed?\n\nSee above." } ]
ypggxVWIv2
GTBench: Uncovering the Strategic Reasoning Capabilities of LLMs via Game-Theoretic Evaluations
As Large Language Models (LLMs) are integrated into critical real-world applications, their strategic and logical reasoning abilities are increasingly crucial. This paper evaluates LLMs' reasoning abilities in competitive environments through game-theoretic tasks, e.g., board and card games that require pure logic and strategic reasoning to compete with opponents. We first propose GTBench, a language-driven environment composing 10 widely-recognized tasks, across a comprehensive game taxonomy: complete versus incomplete information, dynamic versus static, and probabilistic versus deterministic scenarios. Then, we (1) Characterize the game-theoretic reasoning of LLMs; and (2) Perform LLM-vs.-LLM competitions as reasoning evaluation. We observe that (1) LLMs have distinct behaviors regarding various gaming scenarios; for example, LLMs fail in complete and deterministic games yet they are competitive in probabilistic gaming scenarios; (2) Most open-source LLMs, e.g., CodeLlama-34b-Instruct and Llama-2-70b-chat, are less competitive than commercial LLMs, e.g., GPT-4, in complex games, yet the recently released Llama-3-70b-Instruct makes up for this shortcoming. In addition, code-pretraining greatly benefits strategic reasoning, while advanced reasoning methods such as Chain-of-Thought (CoT) and Tree-of-Thought (ToT) do not always help. We further characterize the game-theoretic properties of LLMs, such as equilibrium and Pareto Efficiency in repeated games. Detailed error profiles are provided for a better understanding of LLMs' behavior. We hope our research provides standardized protocols and serves as a foundation to spur further explorations in the strategic reasoning of LLMs.
https://openreview.net/pdf/1616ae3f3c1970951b0401486556c3a49f3df00c.pdf
[ { "confidence": 4, "rating": 4, "review_id": "4dpPjtSuq7", "review_text": "This paper tries to evaluate the strategic reasoning abilities of LLM. Therefore, 10 games are chosen where LLMs is trying to solve the game. This paper includes various open- and closed-source LLMs into consideration and build a benchmark for easy evaluation.\n\nEvaluating the strategic reasoning is important and the evaluation includes various LLMs into consideration.\n\nThe evaluation protocol is questionable. More comments and questions are in the following section.\n\n1. Does the evaluation really evaluate the strategic reasoning? Basically the evaluation is letting the LLM to play as one of the player in the game. However, this is much like a decision making problem, especially when the opponent is also a LLM-agent, where the LLM agent is largely stationary. Therefore, I would like to ask the authors provide the justification about why the evaluation is about strategic reasoning, not the decision making? \n\n2. Also about the strategic reasoning. The selected games only focus on competitive zero-sum games. what about general-sum and multi-player games? Are cooperative games, e.g., hanabi, also requiring strategic reasoning? Even further, mixed cooperative and competitive, e.g., soccer, need the strategic reasoning? I think the strategic reasoning is not well-defined and fully discussed. \n\n3. Does the evaluation really unlock the abilities of LLMs? The evaluation is focusing the prompting. However, for games, especially unfamiliar games for LLMs, exploration is important. Therefore, a memory or long in-context learning of the exploration experience should be included for the evaluation of strategic reasoning in games." }, { "confidence": 4, "rating": 7, "review_id": "zdT1ypHD1c", "review_text": "This paper proposes a benchmark for evaluating the strategic reasoning of LLMs. The benchmark includes ten games of various types. The authors use these games to conduct competitive experiments between LLMs and traditional methods, as well as LLM-vs.-LLM. The paper then analyzes the experimental results and model behavior, and examines the game-theoretic properties of LLMs.\n\n1. The paper is logically clear, understandable, and well-written.\n2. The experiments are comprehensive. The authors evaluate comparisons between LLMs and traditional methods and LLM-vs.-LLM competitions. They include multiple open-source and closed-source models and tests of various prompting methods.\n3. The authors evaluate game-theoretic properties, including Nash equilibrium with regret and Pareto efficiency.\n\nI didn't find any significant weaknesses, only a few questions.\n\n1. In Section 4.1, why does the tree-like prompting strategy ToT still lag significantly behind MCTS?\n2. Is there any reference to classifying games in the benchmark? Why is it classified this way?\n3. Why does the model perform better in probabilistic and dynamic games than in completely deterministic games? Is it that LLM performs better or that MCTS performs worse, making LLM appear better?" }, { "confidence": 4, "rating": 7, "review_id": "U12aTHNgPA", "review_text": "The paper proposes a benchmark to understand the strategic reasoning capabilities of llms. The authors present a suite of game theoretic tasks with different structures to do this. They use different evaluation metrics like ELOs and Relative advantage to compare different llms and prompting methods.\n\n- The paper is clearly written and well motivated. It provides some structure to the growing literature of strategic reasoning with llms.\n- A wide range of closed source, open source models are tested. A good set of prompts are used to test the models too!\n- I particularly liked table 1 and the selection of different tasks with different characteristics.\n- The normalized relative advantage is a good, interpretable metric\n- The framework and taxonomy are clear and easy to understand.\n- Section 4.4 gave some good insight into the types of errors made by llms\n- I also liked reading the analysis in section 4.3, in particular that code pretraining helps with strategic reasoning.\n\n- Characterizing human performance would strengthen the paper\n- Including some qualitative reasoning traces of successes and failures might be insightful.\n- Minor: This paper would be an ideal fit for the datasets and benchmarks track, instead of the main track. I dont think it should be penalized for this though!\n\nTypos\n\nLine 79: Characterize\n\nLine 171: dynamic gaming → dynamic game\n\nSee weaknesses." }, { "confidence": 4, "rating": 6, "review_id": "IuXeFqX6t2", "review_text": "This paper introduces GTBench, a set of 10 different games to test how well large language models can think strategically. The author found that while LLMs struggle with complete and deterministic games like Tic-Tac-Toe and Connect-4, they perform better in incomplete uncertain games like poker and negotiation. Code-pretraining improves their strategic thinking abilities. However, advanced thinking methods like Chain-of-Thought and Tree-of-Thought don’t always help and can sometimes make things worse. The latest open-source models, like Llama-3, are getting closer in performance to commercial models like GPT-4. Common mistakes LLMs make include misunderstanding game rules, being over-confident, and making calculation errors.\n\n1. The paper is well-written and easy to understand. \n2. The problem of evaluating LLMs' strategic reasoning abilities is meaningful. Creating such a benchmark is valuable for the research community.\n3. The paper provides a detailed evaluation of LLMs across different game tasks. These tasks indeed measure the strategic reasoning of LLMs, even if some models already understand the optimal algorithms for those games. (For example, you could ask GPT-4 about the optimal strategy for some of these games, and it knows the optimal algorithm.)\n4. The authors conducted extensive experiments using various base models, including reasoning methods like ToT and CoT. They had some interesting findings and analysis (concluded in the summary).\n\n1. The paper claims that measuring strategic reasoning capabilities with games is missing in existing benchmarks. However, there are other benchmarks, such as MAgIC released last year, that consider benchmarking LLMs' strategic behavior using games. While there are differences, this weakens the claim of novelty.\n2. Some of the selected games, like Tic-Tac-Toe, have known optimal strategies and are not complex enough. These games might not fully challenge the advanced strategic reasoning capabilities of LLMs. Even though the current evaluation is useful, as a benchmark intended for future use, it should be capable of evaluating more advanced or adapted LLM agents.\n3. The benchmark focuses on a set of 10 games. It’s unclear how well the findings generalize to other strategic scenarios, even similar types of tasks. The results appear to be quite case-by-case. A broader range of tasks and scalable evaluation frameworks would make the benchmark more comprehensive.\n4. The experiments primarily involve LLMs and traditional solvers. There is a lack of evaluation against human opponents, which could provide more insights into the models' performance in real-world strategic interactions. As a benchmark, I also expect to have other opponents (for example, the optimal algorithm, the RL based agent).\n\nCould you address the weakness 1, and try to discuss weakness 2-4?" } ]
ypaqE8UwsC
Federated Ensemble-Directed Offline Reinforcement Learning
We consider the problem of federated offline reinforcement learning (RL), a scenario under which distributed learning agents must collaboratively learn a high-quality control policy only using small pre-collected datasets generated according to different unknown behavior policies. Na\"{i}vely combining a standard offline RL approach with a standard federated learning approach to solve this problem can lead to poorly performing policies. In response, we develop the Federated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA), which distills the collective wisdom of the clients using an ensemble learning approach. We develop the FEDORA codebase to utilize distributed compute resources on a federated learning platform. We show that FEDORA significantly outperforms other approaches, including offline RL over the combined data pool, in various complex continuous control environments and real-world datasets. Finally, we demonstrate the performance of FEDORA in the real-world on a mobile robot. We provide our code and a video of our experiments at \url{https://github.com/DesikRengarajan/FEDORA}.
https://openreview.net/pdf/ebb3c4a49c535b4cabc2bd5d7686f30b108d2a55.pdf
[ { "confidence": 4, "rating": 6, "review_id": "j092eJ518S", "review_text": "This paper proposed the Federated Ensemble-Directed Offline Reinforcement Learning Algorithm. The combination of offline RL and federated learning is interesting in addressing the training data insufficiency issue due to small pre-collected datasets.\n\nThe originality of this paper is relatively good, since the proposed Federated Ensemble-Directed Offline Reinforcement Learning Algorithm is effective in offline reinforcement learning. The quality and clarity are also clear, and this paper is actually well-written. The significance of this paper is obvious, because offline reinforcement learning is important in real-world scenarios.\n\n1. Some technical details need to be explained. For example, the ensemble learning and its role.\n2. The novelty of this paper needs further clarification, and what is the main difference between this proposed method and existing studies? It seems that there is only a simple combination of two technologies.\n3. Numerically, the authors could consider comparing their method with more baselines. There are some studies on federated learning for offline RL.\n\n1. Some technical details need to be explained. For example, the ensemble learning and its role.\n2. The novelty of this paper needs further clarification, and what is the main difference between this proposed method and existing studies? It seems that there is only a simple combination of two technologies.\n3. Numerically, the authors could consider comparing their method with more baselines. There are some studies on federated learning for offline RL." }, { "confidence": 4, "rating": 7, "review_id": "UBTqYwAk3j", "review_text": "The authors identify fundamental challenges for Federated Offline Reinforcement Learning and present Fedora, an approach that tackles each of them. They perform extensive evaluation of the approach on Mujoco and real-world datasets showing improved performance over existing work.\n\nThe paper is well-written and, importantly, the code has been shared. The authors run extensive experiments. The work is novel and the notion of federated optimism is particularly interesting. Federated offline RL is an important research area with vast real-world applicability. The algorithm has been shown to be robust to diverse/ heterogenous client datasets. It is also commendable that the approach was tested on a real-world robot.\n\nNo theoretical guarantees have been given for the algorithm though it does build upon foundational work. I believe that the authors should explicitly discuss limitations/ opportunities for future work in the paper. It is important for the algorithm pseudocode to be included in the main material as is the norm in such papers. I believe that there are perhaps many experiments included the main paper meaning that the discussion/ hypotheses for results is somewhat diluted. \nAnother minor issue is that the figures are placed very far away from where they are referred to in text.\n\n* How far do the authors perceive that this model can be pushed? i.e. the assumption that all clients have the same MDP is restrictive but understandable for a first set of experiments.\n\n* Have any experiments been run using D4RL-random datasets? It would be interesting to see whether this collapses learning.\nWith regards to FEDORA outperforming centralised training I think a deeper discussion on this would be useful. \n\n* What is the main reason for this? Heterogenous data though previous work has successfully mixed datasets: https://arxiv.org/abs/2106.06860" }, { "confidence": 4, "rating": 8, "review_id": "TYNoqwNsax", "review_text": "This paper presents the Federated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA), a novel approach for collaborative learning of high-quality control policies in a federated offline reinforcement learning (RL) setting. The paper identifies key challenges in federated offline RL, including ensemble heterogeneity, pessimistic value computation, and data heterogeneity. To address these issues, FEDORA estimates the performance of client policies using only local data and, at each round of federation, produces a weighted combination of the constituent policies that maximize the overall offline RL objective, while maximizing the entropy of the weights. Besides the core idea, FEDORA also performs data pruning\n\n1. This is a novel work proposing the first federated offline RL algorithm in the general case (without assuming linearity). The paper is very well written with clear motivations and detailed discussions on the insufficiency of existing, naive approaches. \n\n2. The experiments are also very thorough and convincing with experiments ranging from simple 2D environments to high-dimensional continuous control problems. The algorithm is also tested on a real-world robot platform, which is very impressive given the density of algorithmic contributions in the paper.\n\n1. \"Collect wisdom\" can be replaced by more rigorous exposition. Same goes with \"ambitious targets\". \n\n2. The number of communication rounds needed for FEDORA to converge is still quite high. \n\n3. Given how well the algorithm does, some sort of theoretical analysis could further strengthen the work.\n\nMy questions are stated above." } ]
ypPzyflbYs
Neural Concept Binder
The challenge in object-based visual reasoning lies in generating concept representations that are both descriptive and distinct. Achieving this in an unsupervised manner requires human users to understand the model's learned concepts and, if necessary, revise incorrect ones. To address this challenge, we introduce the Neural Concept Binder (NCB), a novel framework for deriving both discrete and continuous concept representations, which we refer to as "concept-slot encodings". NCB employs two types of binding: "soft binding", which leverages the recent SysBinder mechanism to obtain object-factor encodings, and subsequent "hard binding", achieved through hierarchical clustering and retrieval-based inference. This enables obtaining expressive, discrete representations from unlabeled images. Moreover, the structured nature of NCB's concept representations allows for intuitive inspection and the straightforward integration of external knowledge, such as human input or insights from other AI models like GPT-4. Additionally, we demonstrate that incorporating the hard binding mechanism preserves model performance while enabling seamless integration into both neural and symbolic modules for complex reasoning tasks. We validate the effectiveness of NCB through evaluations on our newly introduced CLEVR-Sudoku dataset.
https://openreview.net/pdf/83a61e046b4272eb1e838707fd28087549cbe396.pdf
[ { "confidence": 4, "rating": 5, "review_id": "TJSL21C0tb", "review_text": "The paper proposes a novel approach to unsupervised concept learning based on both continuous and discrete encodings. Neural Concept Binder (NCB) allows humans inspecting and revising the learnt concepts. In the experiments, NCB’s discrete concept encodings result as expressive as the continuous encodings. Also, NCB can be integrated with symbolic and sub symbolic module. Finally, to support the experimental evaluation the paper introduces a novel dataset CLEVER-Sudoku, very suitable for neuro-symbolic benchmarking.\n\n-\t**Novelty**: the proposed approach, although based on existing works SySBinder and Slot attention, is surely novel in the field of concept learning and potentially very relevant as it may strongly facilitate the extraction and discovery of unsupervised concepts. Particularly the possibility to revise concepts is completely novel to the best of my knowledge and very useful to improve human-computer interaction.\n-\t**Novel resource** presented: CLEVER-Sudoku will be surely an important resource for the Neuro-symbolic literature.\n\n## Major issues:\n * Method presentation:\n - The way in which block-slot-encodings are obtained, is badly presented. Although it is based on previous literature, since it is a key architectural component, it should have been presented more in details. I suggest the author to employ a background section to report the way in which slot attention and sys binder work, in order to make the paper self-contained. \n - Figure 2 which illustrates the core of the method is quite confusing: it is not clear how the discrete concepts are actually represented (the concept slot encodings reported are positive continuos representations). Also, the reminders to the figures in the text do not help as they generically refer to the entire figure and not to a specific block. A color coding of the different parts of the model could help understanding. \n - How does the $\\texttt{enc}_l^j$ work is not clear. What does it receive in input? Where it is extracted from?\n - All the revision operations are definitely not clear. The formal operation to be executed is often confusing.\n * Experimental evaluation:\n - Models: NCB has been compared only against SysBinder. While it is a very novel and innovative method, there is a complete lack of benchmarking against standard unsupervised concept-based approaches such as SENN[1], BotCL[2], ACE[3]. Comparing against supervised approaches such as CBM[4] or CEM[5] could have been also useful. \n - Datasets: NCB is only tested on variants of CLEVER. While it is surely an interesting benchmark, real-world benchmarks are missing. Experiments on CUB or CELEBA, for instance, would have been very appreciated to better understand the scalability of the approach.\n\n## Minor issues\n * Related work:\n - The unsupervised concept learning literature does not review several important concept-based paper working both post-hoc and explainable by design. Some examples are SENN[1], ACE[3], VAEL[6], as well as notorious prototype-based approaches such as Prototype layer [7] and ProtopNets[8]. \n - Unlike you state, continuous and discrete representations have been combined in recent literature for supervised concept learning. Some examples are CEM[5] and ProbCBM[9].\n * Unclear sentences:\n - “Briefly, given an image x, NCB derives a symbolic representation, c, which expresses the concepts of the objects in the image, i.e., object-factor level concepts. Herefore, NCB infers a block-slot encoding, z, of the image and performs a retrieval-based discretization step to finally infer concept-slot encodings, c”. The consequentiality of the inference process is misleading from this sentence. \n * Method Inspection. What the authors refer as implicit, comparative, interventional and similarity-based inspections are normally referred to as example-based explanations (implicit and similarity-based) and counterfactual explanations (comparative and interventional). Sticking to well-known terms in literature is a good choice to avoid further confusion in the reader. \n\nOverall, I think it's an interesting paper proposing a novel approach to unsupervised concept learning. However, I think it will benefit from a further revision to deeply improve method presentation and expand the experimental campaing including other standard unsupervised concept-learning approaches and datasets.\n\n[1] Alvarez Melis, David, and Tommi Jaakkola. \"Towards robust interpretability with self-explaining neural networks.\" Advances in neural information processing systems 31 (2018).\n\n[2] Wang, B., Li, L., Nakashima, Y., and Nagahara, H. “Learning bottleneck concepts in image classification”. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2023).\n\n[3] Ghorbani, Amirata, et al. \"Towards automatic concept-based explanations.\" Advances in neural information processing systems 32 (2019).\n\n[4] Koh, Pang Wei, et al. \"Concept bottleneck models.\" International conference on machine learning. PMLR, 2020.\n\n[5] Espinosa Zarlenga, Mateo, et al. \"Concept embedding models: Beyond the accuracy-explainability trade-off.\" Advances in Neural Information Processing Systems 35 (2022): 21400-21413.\n\n[6] Misino, Eleonora, Giuseppe Marra, and Emanuele Sansone. \"Vael: Bridging variational autoencoders and probabilistic logic programming.\" Advances in Neural Information Processing Systems 35 (2022): 4667-4679.\n\n[7] Li, Oscar, et al. \"Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 32. No. 1. 2018.\n\n[8] Chen, Chaofan, et al. \"This looks like that: deep learning for interpretable image recognition.\" Advances in neural information processing systems 32 (2019).\n\n[9] Kim, Eunji, et al. \"Probabilistic Concept Bottleneck Models.\" International Conference on Machine Learning. PMLR, 2023.\n\nA few questions to try to understand better the notation employed to define the revision operations.\n- What do the authors mean with $v_l \\rightarrow v_m$? \n- What does the add operation work and how one can provide an encoding for a concept and be sure the network employs it as intended?" }, { "confidence": 5, "rating": 4, "review_id": "OnehlmRqIN", "review_text": "This paper introduces neural concept binder, a neural symbolic framework that utilizes both soft and hard binding. Building on top of the sysbinder model, it can additionally do exemplar-based hard binding and revise concepts. Evaluations made on CLEVR and the proposed CLEVR-Sudoku dataset proved the method's validity.\n\n- The paper is well-written and easy to read. Connections to previous works are clarified nicely.\n\n- It's good to see the incorporations of both hard and soft bindings to existing neural-symbolic frameworks.\n\n- The model achieved good performance on the proposed CLEVR-Sudoku task and can do satisfactory concept revision and inspection, which is a neat proof of concept that hard binding works.\n\nThere are several weaknesses I can foresee that may lead to the rejection of this paper.\n\n- Limited contribution: After so many years of developing neural-symbolic methods in visual reasoning, from the earliest modular approaches to unsupervised concept learners, code-based reasoning models, and recent visual programming-like frameworks, the goal of neural-symbolic modeling has dramatically changed. In this work, the neural concept binder still focuses on one of the earliest task categories designed for visual reasoning (CLEVR attribute classifications or unsupervised concept learners). It's also built on top of sysbinder, in other words, it's merely an incremental improvement by adding a retrieval-based library.\n\n- I don't see any generalizability of this method beyond extremely toy tasks (attribute classification). The proposed CLEVR-Sudoku is strange and does not correspond to any of the real-world visual reasoning tasks. Relational tasks are also not tackled in this paper.\n\n- How generalizable can this method be? Can it serve as a part of the closed-loop reasoning in CLEVR (I mean, the original CLEVR questions, as tackled in NS-CL and NS-VQA line of work)?\n\n- Can relational concepts be similarly represented via soft/hard binding?" }, { "confidence": 3, "rating": 7, "review_id": "gWhD5j0YWx", "review_text": "The authors introduced a pioneering framework that combines an object-centric learning module with a retrieval-based module to address visual reasoning tasks and a new visual reasoning task, CLEVR Sudoku. The proposed method demonstrated significant potential in effectively acquiring inspectable and revisable concepts via human or machine feedback in various scenarios.\n\n- S1: The proposed method offers significant novelty in that it has the potential to serve as a building block for concept learning, which can be leveraged as a core module in other frameworks. The author's well-structured experiments provided compelling evidence in support of these claims.\n\n- W1: The proposed approach can be interpreted as directly integrating SysBinder and HDBSCAN. Because the initial concept detection fundamentally depends on the complete functionality of SysBinder, this framework may not circumvent specific inherent challenges of object-centric learning, including inductive bias resulting from choosing the proper object-factor encoder and identifiability issues.\n- W2: Using HDBSCAN is intuitive in the proposed method, but it would be beneficial to include an additional experiment that compares different clustering methods.\n\nPlease check out the Weakness section first. I listed the following questions and suggestions that would be helpful for authors' future works:\n- Q1: The recent method [1] in object-centric learning literature is linked to causal representation learning and the identifiability of slot representations. How can this be integrated into your framework? \n- Q2: Object-factor learning can be interpreted as learning atoms in logic, and the NN explanations in Table 2 can be seen as the simple form of propositional logic in a neuro-symbolic framework. How can an object-centric learning framework be extended to represent logical rules, such as in the form of first-order logic?\n\nReference\n- [1] Mansouri, Amin, et al. \"Object-centric architectures enable efficient causal representation learning.\" arXiv preprint arXiv:2310.19054 (2023)." }, { "confidence": 3, "rating": 6, "review_id": "U0HGZEIJjj", "review_text": "This paper introduces Neural Concept Binder, a framework for obtaining discrete concept representations from images without any supervision. The method is an extension of Neural Systematic Binder (SysBinder), adding a clustering step on top of the block-slot representations to obtain discrete concept representations. The resulting representations are interpretable and modifiable, as shown in the experiments. The model is additionally evaluated on property prediction and downstream tasks on modifications of the CLEVR dataset and shown to be able to leverage fewer training samples than SysBinder.\n\nThe paper investigates an important problem in learning discrete, interpretable concepts from images in an unsupervised way. The model is a logical extension of SysBinder in clustering the representations to obtain discrete concepts. The experiments show improvements in sample efficiency of these discrete representations over SysBinder’s continuous representations.\n\n1. Since the solver used in the Sudoku experiments is the same across all baselines, it seems the determining factor of performance is in how well the digits are classified. Therefore, I do not believe framing this evaluation in the context of Sudoku adds any insight—in fact it seems to add unnecessary noise to the evaluation. The evaluation in the appendix (Figure 8) seems more informative and sufficient for determining the benefit of NCB. \n2. This paper is missing several related citations: Unsupervised Concept Discovery Mitigates Spurious Correlations (https://arxiv.org/abs/2402.13368) and Neural Language of Thought Models (https://arxiv.org/abs/2402.01203). NLoTM is particularly relevant and can be an additional baseline since it also extends SysBinder to learn discrete representations, except it is trained in an end-to-end way.\n3. The discussion in section 3.3 is interesting, but it would be informative to tie each point with corresponding experimental evidence.\n\n1. For SysBinder (hard) and SysBinder (step), do the models train well with this temperature adjustment? E.g. do they exhibit decent reconstruction and slot decomposition?\n2. I’m not sure if I completely understand the analysis for Q4, but since this is done on the concept encodings, and not the discrete representations, can the same analysis be done with SysBinder representations? If so, does NCB offer any additional benefits here?\n3. How important is the choice of clustering algorithm to the results? What if we use a simple k-means clustering as is done in the original SysBinder paper?" } ]
ypFgcT147Z
Decoupling Semantic Similarity from Spatial Alignment for Neural Networks.
What representation do deep neural networks learn? How similar are images to each other for neural networks? Despite the overwhelming success of deep learning methods key questions about their internal workings still remain largely unanswered, due to their internal high dimensionality and complexity. To address this, one approach is to measure the similarity of activation responses to various inputs. Representational Similarity Matrices (RSMs) distill this similarity into scalar values for each input pair. These matrices encapsulate the entire similarity structure of a system, indicating which input lead to similar responses. While the similarity between images is ambiguous, we argue that the spatial location of semantic objects does neither influence human perception nor deep learning classifiers. Thus this should be reflected in the definition of similarity between image responses for computer vision systems. Revisiting the established similarity calculations for RSMs we expose their sensitivity to spatial alignment. In this paper we propose to solve this through _semantic RSMs_, which are invariant to spatial permutation. We measure semantic similarity between input responses by formulating it as a set-matching problem. Further, we quantify the superiority of _semantic_ RSMs over _spatio-semantic_ RSMs through image retrieval and by comparing the similarity between representations to the similarity between predicted class probabilities.
https://openreview.net/pdf/d22a2517d54b412df61755b57dfc902e0053fba1.pdf
[ { "confidence": 3, "rating": 6, "review_id": "6DQ75fZQIm", "review_text": "In this paper, the authors propose a new method to measure similarity between responses of deep neural networks in vision. They reformulate the commonly used strategy to compute Representational Similarity Matrices (RSMs) by acknowledging the superiority of the semantic information over the spatio-semantic information in the creation of RSMs. The authors perform different experiments to show the improvement over the baseline method caused by their reformulation.\n\n- The paper is clearly written and tackles an important topic.\n- The idea is original and besides its limitation connected to high computational needs, it could serve as an inspiration for future works.\n- Although the experiments are not exhaustive, they are convincing and coherent and the gained insights seem relevant.\n- Being transparent with the limitation of the method and trying to provide the means to mitigate it is a plus.\n\n- The authors should better discuss the differences between the results obtained for ViTs and CNNs in their study, which are quite well visible. E.g. while in the case of an examined CNN, the spatio-semantic RSM does not reflect well the similarities between translated images, in the case of the examined ViT (appendix), these similarities can be observed. The other thing is that the experiment is slightly different, because in the experiment with CNNs much smaller images are used than in the ViT experiment. Also, the differences are also visible in Table 1 (ResNets obtain much higher absolute correlation values for the baseline and the proposed methods than ViTs). \n- The authors provided few visual examples of the results of their method. It would be good to provide more of them (e.g. for different similarity metrics used, for more images and for more netorks) to enable more comprehensive qualitative evaluation (they could be placed in the appendix). \n- The use of some methods at work is not well justified (e.g. Pearson correlation).\n\n- Why do the authors use Pearson correlation to examine the relationship between the Jensen-Shannon Divergence and the representational similarity? E.g. Kendall/Spearman correlation can be more robust.\n- The authors should focus on the differences between the results obtained for CNNs and ViTs (see the comment in the weaknesses section). \n- The statement in the introduction “we argue that the spatial location of semantic objects does neither influence human perception nor deep learning classifiers.” is a little bit too bold - the paper does not examine human perception, therefore it would be better to leave only the deep learning here. \n- Also, a minor thing is that some typos, grammatical and formatting errors can be found in the paper (e.g. the sentence starting in l79, l205: a RSM -> an RSM, l25: SAMor CLIPSeg ,the retrieval performance)\n- The authors could provide more examples of their method for different networks to enable their better qualitative assessment which is now limited (e.g. in the appendix)." }, { "confidence": 3, "rating": 6, "review_id": "hHZT2eJs5z", "review_text": "This paper proposes Semantic RSMs to understand the internal representations in deep neural networks. The authors argue that the current RSMs are limited by their coupling of semantic and spatial information, which restricts the assessment of similarity. The proposed semantic RSMs are spatial permutation invariant and focus solely on semantic similarity. The proposed method is shown to enhance retrieval performance and provide a more accurate reflection of the predictive behavior of classifiers.\n\n1. This paper is well-written and easy to follow.\n2. The introduction of semantic RSMs is a significant contribution, potentially leading to more meaningful comparisons between neural network models.\n3. The empirical demonstration of improved retrieval performance using semantic RSMs is convincing and adds practical value to the theoretical development.\n\n1. While the paper does highlight the high computational complexity as a limitation, it would benefit from a more detailed discussion on the scalability of the proposed method to larger models and datasets and the approximation error.\n\nI'm not an expert in this field, so I tend to start by looking at what other reviewers think of the paper." }, { "confidence": 3, "rating": 5, "review_id": "PKbHW0M8sb", "review_text": "The authors introduce semantic RSMs, which are designed to be invariant to the spatial arrangement of elements within images. These semantic RSMs assess similarity by treating the problem as one of set-matching, where the focus is on matching semantic content rather than spatial details. This approach not only aligns more closely with human perception but also improves the relevance and accuracy of image retrieval tasks. The paper claims that semantic RSMs offer a more robust measure of similarity by comparing them to traditional spatio-semantic methods.\n\n- The focus on semantic content rather than spatial arrangement aligns more closely with human perception, potentially leading to more intuitive and relevant comparisons of neural network responses.\n\n- By being invariant to spatial permutations, this method can effectively compare images where the same objects appear in different locations.\n\n- Semantic RSMs can be used as drop-in replacements for traditional RSMs.\n\n- Employing algorithms like Hungarian matching to find the optimal permutation matrix can be computationally expensive.\n\n- The effectiveness of this approach relies heavily on accurate identification and parsing of semantic concepts within images, which can be challenging in complex scenes or under conditions of visual ambiguity.\n\n- While focusing on semantic content is generally advantageous, completely ignoring spatial information can sometimes omit useful contextual cues that contribute to overall image understanding. For example, \n\n[contextual cues] a picture of a dining table with plates, utensils, and food arranged in a specific way might convey a meal setting, which could be lost if the spatial relationships are ignored. \n\n[object interactions] Images where interactions between objects are important, such as a cat sitting on a mat, might lose their interpretative meaning if spatial information is disregarded. The semantic content (cat, mat) remains the same, but the relationship changes based on their arrangement.\n\n[abstract content] In abstract art or images with non-literal interpretations, spatial composition itself can carry meaning and affect how the content is perceived and classified.\n\n- How well does the method scale to very large datasets or to more complex neural networks that handle highly varied or abstract visual content?\n\n- How does the method perform under noisy conditions or when semantic parsing is imperfect due to occlusions or poor image quality?" }, { "confidence": 2, "rating": 5, "review_id": "gbfxyQ8BC9", "review_text": "This paper makes a contribution to the construction of RSMs in the field of vision neural networks and puts forward the concept of semantic RSMs, which is innovative and theoretical.\n\nThe proposed semantic RSMs are used for spatial alignment by means of optimal permutation, which is a relatively new and promising method.\n\nThis paper verifies the validity of semantic RSMs through experiments such as image retrieval and probabilistic similarity comparison. An in-depth analysis of the experimental results is carried out, and the advantages of semantic RSMs in specific tasks are pointed out.\n\nThis paper lacks the experimental verification of specific downstream tasks, such as detection and segmentation, on semantic RSMs. I need to know which scenario is more suitable for RSMs and semantic RSMs.\n\nLack of quantitative comparative data. It is suggested to add tables or charts to show specific performance comparison data between semantic RSMs and existing methods in different tasks (such as image retrieval, class probability similarity comparison, etc.), including accuracy, time complexity and other indicators.\n\nThe discussion of the experimental results was not thorough enough. It is recommended to add a detailed analysis of the experimental results to explain why semantic RSMs perform better on certain tasks, as well as possible reasons and limitations.\n\n \"aligns\" to \"align\" in line 57.\n\nIt is suggested to further elaborate the potential and specific scenarios of the research in practical applications to enhance readers' understanding of its practical value." } ]
ypEamFKu2O
PGN: The RNN's New Successor is Effective for Long-Range Time Series Forecasting
Due to the recurrent structure of RNN, the long information propagation path poses limitations in capturing long-term dependencies, gradient explosion/vanishing issues, and inefficient sequential execution. Based on this, we propose a novel paradigm called Parallel Gated Network (PGN) as the new successor to RNN. PGN directly captures information from previous time steps through the designed Historical Information Extraction (HIE) layer and leverages gated mechanisms to select and fuse it with the current time step information. This reduces the information propagation path to $\mathcal{O}(1)$, effectively addressing the limitations of RNN. To enhance PGN's performance in long-range time series forecasting tasks, we propose a novel temporal modeling framework called Temporal PGN (TPGN). TPGN incorporates two branches to comprehensively capture the semantic information of time series. One branch utilizes PGN to capture long-term periodic patterns while preserving their local characteristics. The other branch employs patches to capture short-term information and aggregate the global representation of the series. TPGN achieves a theoretical complexity of $\mathcal{O}(\sqrt{L})$, ensuring efficiency in its operations. Experimental results on five benchmark datasets demonstrate the state-of-the-art (SOTA) performance and high efficiency of TPGN, further confirming the effectiveness of PGN as the new successor to RNN in long-range time series forecasting. The code is available in this repository: https://github.com/Water2sea/TPGN.
https://openreview.net/pdf/fedcae5d40b0b77e815930762cf718eaa6edbdff.pdf
[ { "confidence": 4, "rating": 6, "review_id": "QzOGXcqoUk", "review_text": "This paper proposed a Parallel Gated Network (PGN) as a successor to RNN, featuring a Historical Information Extraction (HIE) layer to directly capture information from previous time steps. Additionally, it introduces a Temporal PGN (TPGN) framework with two branches to capture both long-term periodic and short-term semantic patterns, demonstrating state-of-the-art performance in long-range time series forecasting.\n\n1. This paper compares a variety of cutting-edge methods.\n2. The experiments are generally thorough.\n\n1. The major issue with this paper is the lack of analysis and comparison with significant literature. The entire paper's premise is the traditional RNN's failure in long-term sequence problems due to the long information propagation paths of its recurrent structure. However, as far as I know, SegRNN[1] has already addressed these shortcomings of traditional RNN in long-term forecasting through segmented iteration and parallel prediction. Yet, there is no discussion on this in the paper. Please compare your method with it and clarify your differences and advantages.\n\n2. In Section 2, you should distinguish between Linear-based and MLP-based methods. The former has only single-layer parameter connections, while the latter has multiple layers and can learn non-linear features due to the presence of activation functions. Methods like DLinear and FITS should be classified as Linear-based methods.\n\n3. The description of HIE is unclear: (i) The process shown in Figure 2(a) suggests first performing linear mapping and then zero-padding, which conflicts with Equation 1 in the paper, where H = HIE(Padding(X)), and the actual code. It is recommended to modify Figure 2 to make this clearer. (ii) Line 169 describes that “HIE(·) is a linear layer,” but in practice, the behavior of HIE is more like a sliding aggregation operation of CNN (or TCN) rather than a sorely linear mapping. Given **(ii)**, calling the proposed method RNN-based is debatable since it is more likely TCN-based. \n\n4. You should include an ablation analysis of the normalization layer, explaining its impact on TPGN achieving state-of-the-art results.\n\n5. Although the authors provide source code, it does not include the hyperparameter settings required to reproduce the key results in the paper, meaning there is no directly runnable script. Are the hyperparameters in the main results all defaults? For instance, is TPGN_period=24? If not, providing a complete script file that can be run directly is necessary.\n\n\n[1] Lin, S., Lin, W., Wu, W., Zhao, F., Mo, R., & Zhang, H. (2023). Segrnn: Segment recurrent neural network for long-term time series forecasting. arXiv preprint arXiv:2308.11200\n\nSee Weaknesses." }, { "confidence": 4, "rating": 7, "review_id": "FVXeiAOY8u", "review_text": "This paper focuses on long-range time series forecasting problems. To address the limitations of RNNs, a novel paradigm called PGN is introduced as an alternative, providing shorter information propagation paths. Building upon PGN, the paper further presents a generic temporal modeling framework named as TPGN, which effectively captures both long-term and short-term periodic patterns, as well as local and global information, through a dual-branch design. The experimental results in this paper demonstrate that TPGN exhibits excellent performance in time series forecasting.\n\nS1: This paper proposed a novel paradigm called PGN, which effectively tackles the inherent issues of RNNs through a simple yet powerful design. PGN exhibits a high level of innovation and holds the potential to replace traditional RNNs.\n\nS2: TPGN primarily focuses on modeling the temporal dimension. Its dual-branch design makes sense as it captures both long-term and short-term periodicity, as well as the local and global characteristics of time series. Additionally, it is reasonable to set up different univariate forecasting tasks to evaluate TPGN's performance.\n\nS3: This paper is well-written, and the presentation of the figures and tables is clear, making it easy to understand and follow. The experimental comparisons are comprehensive, including numerous advanced baseline models such as iTransformer, ModernTCN, FITS, TimeMixer, PDF, WITRAN, and Basisformer.\n\nW1. For tables with a large amount of content, such as Table 1, it may be beneficial to consider using different colors for highlighting, as it could enhance clarity. Additionally, another option to consider is moving some of the experimental results to an appendix.\n\nW2. While TPGN exhibits some advantages in terms of efficiency, I have noticed that it still appears to be challenging to reach the optimal level. Specifically, I have noticed that as the input sequence size increases, the efficiency of TPGN may gradually become inferior to that of iTransformer.\n\nQ1. Why was the Gated Mechanism designed in PGN this way in Figure 2 (a)? Can this part be replaced with GRU or other RNN variants?\n\nQ2. Is it necessary to have two Linear layers in the design of the short-term branch in TPGN?" }, { "confidence": 4, "rating": 5, "review_id": "iOm5tJy5Iz", "review_text": "The paper introduces a new model paradigm which aims to solve the traditional bottlenecks of RNN models, such as non-parallel computation, gradient explosion/vanishing issues, etc.\n\n1. An important problem is studied in this paper.\n2. The overall representation is clear and easy to follow.\n3. A comprehensive summary of the related work is provided.\n\n1. The overall contribution is not very significant.\n2. Some questions regarding the time complexity and experiments need to be clarified.\n\nThe model proposed in this paper is pretty neat and easy to follow. My questions mainly focus on the time complexity and experiments:\n1. I think the total amount of computation done in the PGN should be O(L^2) because it is 1 + 2 + … + L, which is O(L^2). Although the PGN removes half of the computations of the self-attention, the self-attention and PGN still share the same asymptotic complexity. Thus, technically, replacing the PGN with self-attention won’t change the time complexity asymptotically. But I agree that it should be faster than RNN since it enables parallel computation as self-attention does. Also, this is the reason why I think the overall contribution is less exciting than the paper title claims. Can the authors kindly address this issue?\n2. The above discussion can also be tested in the Efficiency of Execution experiment.\n3. Regarding the experiments, only an input length of 168 is tested. Why did the authors choose to fixate on this input length instead of testing some other options?\n4. In the ablation test, it seems that on some datasets (e.g., ETTh1), TPGN’s improvement is very slight compared to its LSTM/GRU/MLP ablations. Can the authors provide some analysis on these cases?\n5. I am also interested to see some analysis on what would happen if the PGN is replaced by self-attention." }, { "confidence": 5, "rating": 7, "review_id": "jpzTU4OIxe", "review_text": "This paper proposes a new network called PGN to capture the long-term dependencies of time series. Based on PGN, this paper further design TPGN for long-range time series forecasting. TPGN consists of two branches to respectively capture the long-term periodic patterns and short-term information of time series. Extensive experiments are conducted to show the effectiveness and efficiency of the TPGN.\n\nS1. This paper is easy to follow. The motivations are clearly described by figures. The authors thoroughly analyze the information propagation modes of different time series forecasting models and explore new information propagation path to improve TS forecasting effectiveness.\n\nS2. The design of the PGN is novel, which is a completely new network architecture and can effectively solve the inherent problems of classic RNN models. Both experimental results and theoretical analysis show the effectiveness and efficiency of PGN.\n\nS3. This paper proposes TPGN upon PGN, which capture both the long-term and short-term characteristics of the time series with low computational complexity.\n\nS4. Experiments are sufficient. Five benchmark datasets are evaluated and the most representative models proposed recently are included in the experiments.\n\nW1. The computational complexity of TPGN is not well discussed in this paper, and it would be better if the inference efficiency was adequately discussed as the time series size increases.\n\nW2. Some presentation needs to be improved. For example, it is difficult for readers to quickly get important conclusions on Table 1 and Table 4.\n\nIn table of experiment comparison, could you explain why TPGN-long outperform TPGN in some cases." } ]
ynJr0RW6FR
ReGS: Reference-based Controllable Scene Stylization with Gaussian Splatting
Referenced-based scene stylization that edits the appearance based on a content-aligned reference image is an emerging research area. Starting with a pretrained neural radiance field (NeRF), existing methods typically learn a novel appearance that matches the given style. Despite their effectiveness, they inherently suffer from time-consuming volume rendering, and thus are impractical for many real-time applications. In this work, we propose ReGS, which adapts 3D Gaussian Splatting (3DGS) for reference-based stylization to enable real-time stylized view synthesis. Editing the appearance of a pretrained 3DGS is challenging as it uses discrete Gaussians as 3D representation, which tightly bind appearance with geometry. Simply optimizing the appearance as prior methods do is often insufficient for modeling continuous textures in the given reference image. To address this challenge, we propose a novel texture-guided control mechanism that adaptively adjusts local responsible Gaussians to a new geometric arrangement, serving for desired texture details. The proposed process is guided by texture clues for effective appearance editing, and regularized by scene depth for preserving original geometric structure. With these novel designs, we show ReGs can produce state-of-the-art stylization results that respect the reference texture while embracing real-time rendering speed for free-view navigation.
https://openreview.net/pdf/ff2c2a64aeb6fea451908d363d55da1992fca363.pdf
[ { "confidence": 3, "rating": 5, "review_id": "ku0BsdVDrY", "review_text": "This paper presents a method for stylizing 3D Gaussian Splatting (3DGS) using a single reference image. Unlike NeRF, which uses a structured representation, 3DGS is an unstructured discrete representation that tightly binds geometry and appearance to each Gaussian splat. To address this challenge, the paper introduces a texture-guided control mechanism, which differs from the position-guided approach used in the original 3DGS paper. This new mechanism effectively edits the appearance of a pretrained 3DGS to match the detailed texture from the reference image while preserving the original geometric structure.\n\n+ The main novelty of this work lies in the Gaussian splitting strategy, which is based on the color gradients of all Gaussians over iterations, rather than the positional gradients used in the original 3GDS approach. I find this approach to be quite neat and well-suited for the task.\n+ The ablation study demonstrates the benefits of using this approach, including a reduction in the number of Gaussians needed to model the details of the reference texture.\n\n+ The novelty of the method seems somewhat limited, as it is largely based on Ref-NPR to enable image-reference-guided stylization.\n+ It is unclear how well the method would perform if the geometry is also heavily stylized, rather than just the appearance.\n+ The results (specially the video results) presented are quite limited, e focusing primarily on simple synthetic scenes with white backgrounds, and do not demonstrate the method's effectiveness on more complex scenes.\n\n+ Can this method be applied when the geometry is heavily stylized? Most of the stylization examples seem to focus only on color/appearance.\n+ If possible, could the author share the video results for the Fern and Truck scenes in Figure 6?" }, { "confidence": 4, "rating": 6, "review_id": "AsCLUnt18F", "review_text": "The paper presents an optimization-based approach for style transfer of a (pre-baked) 3D scene represented by a 3D Gaussian splatting (3DGS). In order to fine-tune the given 3D scene with a style reference image of a single view, the authors suggest using a texture-guided controlling algorithm, which modifies the densification algorithm of the original 3DGS by focusing on the color gradients. The training loss is also modified to include depth-based geometry regularization and additional guidance provided by generated pseudo-views based on the 3D projections of the given style reference onto novel view cameras. The experiments are performed upon the existing public weights of 3DGS, where the method is compared with three NeRF-based methods, ARF, SNeRF, and Ref-NPR.\n\n1. As demonstrated in the supplementary materials and figures in the manuscript, the method seems to work well with the pre-baked 3DGS weights.\n2. Detailed related work section enlightens novice readers to get familiar with the field of style transfer of 3D scenes.\n3. Adequate level of implementational details are provided.\n\n1. Although the topic and the approach presented in the paper seems adequate, the presentation of those can be much better. For example, since the authors have modified the original training algorithm of 3DGS in Section 3.2 of the manuscript, and this seems to be the most significant contribution of this paper, they can use *Algorithmic/Algorithm2E features of LaTeX* or present with *a pseudocode of the densification algorithm* to more clearly present the key differences between theirs and the original 3DGS.\n2. The components of the proposed loss functions, such as the TCM, the pseudo-view loss, the depth regularization, and the color-matching loss, which are originally devised to work with NeRF-based scene representation (Ref-NPR). I do not want to argue with the novelty of this adoption, but I believe that the design decision should be more firmly verified. Even though these losses may be generalizable to 3DGS-based representations as the paper implies, this hidden claim should be re-assessed with each component on the compatibility with the new representation (3DGS). In other words, *ablation studies for these loss functions* can be carried out just like [Figure 6 of Ref-NPR paper](https://ref-npr.github.io/assets/2212.02766.pdf) in order to justify the fitness of the proposed loss function with 3DGS representations.\n3. I understand that an exhaustive quantitative analysis in this topic can be very difficult to design, but comparing the results with only one table seems not promising enough. For example, detailed tables with each test scene, just like [Table B.1 of Ref-NPR](https://ref-npr.github.io/assets/2212.02766.pdf), can be added with more visualization.\n4. The paper could be much better with visualization of *how different style reference images affect a single scene* with the proposed algorithm. For example, Ref-NPR shows results with multiple style inputs acting on a single baked scene.\n\nAs a summary, my key concern is the (1) representation of the materials, the (2) justification of the presented/adopted components (the losses, the densification algorithms), the (3) lack of quantitative comparison table of each scene, and the (4) lack of comparison of the results from different style images.\n\nThe main contribution of the paper I believe is to report the results from applying the training algorithms of Ref-NPR to 3DGS-based representations with proper algorithmic modification to make it suitable for 3DGS. One requires to compare at least all the cases demonstrated in Ref-NPR in order to justify that this training scheme for style transfer is better suited for 3DGSs than NeRFs. Therefore, unless the mentioned points are addressed, I believe this version of the manuscript is not ready for publication in this venue.\n\n1. How were the balancing parameters lambda of equation (8) obtained? Are the values of these hyperparameters critical in achieving high quality results, or are the optimal set of values differ across different stylization tasks? If so, providing the recommendation of choosing these hyperparameters will make the work more applicable.\n2. Since the approach only densifies (and not prunes, may I guess) the Gaussians, the resulting scene should be much heavier than the original. How much are the number of Gaussians change in the shown experiments? How the quantitative scores (Ref-LPIPS etc.) change as the number of Gaussians increases? Is there any recommendation to modulate the size of the stylized Gaussian splatting?\n\nPlease note that these questions are not counted in my overall scoring." }, { "confidence": 4, "rating": 6, "review_id": "jbLXZOVujd", "review_text": "The paper proposes a method to stylize 3D Gaussians using a texture guidance. The method takes a pretrained 3D Gaussian model and one content-aligned reference image as inputs and outputs a stylized 3DGS model which could be rendered at real-time framerate. Several techniques, including structured densification, depth-based geometry regularization and view-consistency constraints are introduced to achieve an effective stylization which performs better than previous state-of-the-art work.\n\n1. The paper is generally well-written and easy to follow.\n2. The insight on color gradients is interesting and works well. The method seems promising for appearance editing on Gaussians.\n3. Both qualitative and quantitative evaluations show noticeable improvement compared to previous work.\n\n1. The methodology seems largely inspired by Ref-NPR, though adapted to fit the 3D Gaussians. Readers may have to read Ref-NPR first in order to understand the motivation behind the design choices, especially in Section 3.4.\n2. The superscript $(x, y)$ in Eq. 5 is not explained.\n3. Minor indentation issues on L154, L188, L198, and L224.\n\n1. As you have mentioned, calculating TCM loss is slow. An ablation may better explain why TCM must be introduced despite its long running time.\n2. Is it possible to use multiple views as texture references?" }, { "confidence": 4, "rating": 7, "review_id": "9FvkdTyQNl", "review_text": "The paper proposed a texture-guided Gaussian densification strategy for exemplar-based 3DGS style transfer with content-aligned reference, while preserving original geometry by depth supervision.\nDuring 3D stylization with a style template reference, the introduced texture-guided Gaussian control strategy can geometrically adaptively densify 3D Gaussians for fine-grained texture optimization. \nRelying on the advanced representation of 3DGS, the stylized scene can achieve real-time rendering of novel views.\n\n1. The paper proposed a decent design of style transfer for a 3DGS scene while preserving geometry by depth supervision.\nThe novel texture-guided control of Gaussian densification assists in optimizing texture with high-frequency details.\nI believe this strategy worths attention beyond 3DGS appearance stylization.\n\n2. The Stylized Pseudo View Supervision works better than other multi-view consistent stylization baselines, in terms of semantic consistent stylization for uncovered areas by reference view. \n\n3. The elaboration of methodology is technically sound, which is possibly reproduced.\n\n4. The experiments and evaluation are convincing with ablation studies and baseline comparisons. And paper experimented on diverse scenes covering objects, forward-facing scenes, an unbounded scene. But I still have some main concerns mentioned in Weaknesses 2.\n\n1. The paper mainly concerns fine appearance optimization by densification and depth supervision.\nFor 3D stylization, geometric stylization and editing could be tried or discussed based on proposed method. For example, stylizatin given an edited view with minor shape changes.\n\n2. The most innovative and inspiring part is the Texture-guided Gaussian Control with texture guidance plus structured desification. However, the experiment part can be further improved:\n\n 2.1. In Appendix C, there an ablation study by comparing original 2 Gaussians and proposed 9 Gaussians densification set. There is no solid and scientific validation for the best selection of the number 9. Please see details in Question 2. \n\n 2.2. There is no ablation study of ablating only texture guidance (i.e. use original position gradients as guidance), or ablating only structured densification (i.e. use original densification scheme). Current Sec 4.2 ablation study of Texture-Guided Control show the joint effect of texture guidance and structured desification, which cannot show the effects come from the joint cooperation or from one dominant strategy. Please see details in Question 3. \n\n3. A minor point and suggestion.\nFor evaluation comparisons, the paper mainly compare with baselines with Plenoxels representation.\nSince ReGS's fast training and rendering capability replies on 3DGS, even ablation studies provide good validataion, I still expect comparisons with baselines with 3DGS, e.g. reproduce 3DGS-version SNeRF.\n \n4. Minor issue in related work section. The paper should stress 2D and 3D stylization involve only image-exemplar based neural style transfer.\nSince this work finishes edited-view guided stylization, methods of text-guided dataset editing for optimization such as Instruct-NeRF2NeRF/Instruct-GS2GS is also a suitable related work.\nThere are also some concurrent work stylizing 3DGS scenes, such as StyleGaussian, StylizedGS, Gaussian Splatting in Style.\n\n1. For texture guidance, the paper selects color gradient as hints for desification, which is a straightforward constraint hint. Is the selection based on trials among all variables such as scales, colors, rotations, opacity, etc.? If yes, what are differences among different gradient hints? \n\n2. In the ablation study of Structured Densification (in Appendix C), I would suggest to conduct an experiment with different numbers of a dense set of Gaussians for each responsible Gaussian to be splitted, varying from original 2 to proposed 9 or even larger number.\nThere is not enough experimental statistics to support the densification strategy via replacing by a denser set of 9 Gaussians, rather than smaller 5, or larger 16 Gaussians.\nIn addition, in Appendix C default setting is based on position-gradient-guided or proposed color-gradient-guided density control?\n\n3. In Texture-guided Gaussian Control, which one between Texture Guidance and Structured Densification is more important? Or only when both jointly work, ReGS can gain better performance than naive densification strategy?\n\n4. I wonder if this Gaussian densification strategy supports original reconstruction and other downstream tasks. \n\nI would like to see more analysis and insights particularly for Questions 1-3 in the discussion phase." }, { "confidence": 4, "rating": 5, "review_id": "5yhNK7mq9w", "review_text": "The paper introduces ReGS, a new reference-based 3D style transfer method that utilizes 3DGs as the 3D representation. To capture fine-grained details from the reference view, the method employs texture-guided Gaussian control to enhance density in areas where texture is under-represented. Additionally, the approach incorporates depth-based regularization and pseudo-view supervision to ensure consistency while stylizing with the reference image. The quantitative and qualitative results demonstrate that ReGS achieves superior stylization, capturing detailed and high-quality effects more effectively than previous methods.\n\n- The paper is well-written and comprehensive, making it easy to follow.\n- The experiments are detailed, and the impact of each proposed method is demonstrated step-by-step.\n- The stylization results effectively capture fine-grained details from the reference image.\n- The proposed appearance-based densification approach is simple yet proves to be effective.\n- The choice of 3D GS for reference-based stylization results in faster rendering performance.\n\n- I do not find the methods presented in the paper to be significantly novel, as they give the impression of being a 3DGS-adapted version of Ref-NPR. While I acknowledge the differences and novelties introduced to effectively adapt reference-based stylization to the 3D-GS setting, I do not see a critical distinction in terms of the 'style transfer technique' itself, once the modifications specific to the 3D-GS settings are set aside. This is primarily because the stylization pipeline (Section 3.4) closely mirrors that of Ref-NPR, without introducing new improvements or modifications.\n- The qualitative comparison presented in Figure 6 appears unfair. As I understand, ARF and SNeRF in this experiment are stylized using a stylized reference view, and the discrepancies between these results and the reference view are emphasized. However, the primary objectives of ARF and SNeRF differ from those of Ref-NPR and ReGS, as they are not specifically designed for reference-based stylization. Consequently, there is no inherent need for their stylization results to strictly adhere to the reference view. I believe the authors are aware of this distinction. For a fairer comparison, it would be more appropriate for the authors to include the original 2D style image for ARF and SNeRF and conduct a qualitative assessment based on aesthetic quality. Comparisons of the ability to replicate high-frequency details and correspondence should perhaps be reserved exclusively for comparisons with Ref-NPR.\n\nPlease see the weaknesses above." } ]
yltJAlwtW9
Information-theoretic Generalization Analysis for Expected Calibration Error
While the expected calibration error (ECE), which employs binning, is widely adopted to evaluate the calibration performance of machine learning models, theoretical understanding of its estimation bias is limited. In this paper, we present the first comprehensive analysis of the estimation bias in the two common binning strategies, uniform mass and uniform width binning. Our analysis establishes upper bounds on the bias, achieving an improved convergence rate. Moreover, our bounds reveal, for the first time, the optimal number of bins to minimize the estimation bias. We further extend our bias analysis to generalization error analysis based on the information-theoretic approach, deriving upper bounds that enable the numerical evaluation of how small the ECE is for unknown data. Experiments using deep learning models show that our bounds are nonvacuous thanks to this information-theoretic generalization analysis approach.
https://openreview.net/pdf/c4a25ce5ac23050bc6d164b9b2269d2890c8bde3.pdf
[ { "confidence": 4, "rating": 5, "review_id": "8FGV8j9ZaL", "review_text": "This paper analyzes the estimation bias and generalization error of the expected calibration error (ECE). Specifically, in a binary classification setting, the authors provide an upper bound for the total bias with an improved convergence rate, applicable to both uniform mass and uniform width binning strategies. They also determine the optimal number of bins to minimize the total bias. Furthermore, the authors utilize the information-theoretic generalization framework, particularly the Conditional Mutual Information (CMI) framework, to characterize the generalization of ECE.\n\n1. This paper achieves a tighter bound for total bias compared to previous works.\n\n2. The optimal number of bins is determined using the upper bound of the total bias.\n\n1. As the authors themselves note, a significant limitation is that the analysis in this work is only applicable to binary classification.\n\n2. Some assumptions (e.g., Assumption 2) are not well justified.\n\n3. The writing has significant room for improvement; several arguments are unclear or misleading.\n\nPlease find more details in the questions below.\n\n1. Does Assumption 2 hold true in practice? Is there a way to verify it? Additionally, what is the motivation behind assuming $n_e\\geq 2B$? How is this assumption utilized? If $n_{te}\\leq 2B$, will Theorems 2 and 3 still be valid?\n\n2. In the proof sketch of Theorem 2, you mention that $\\mathrm{ECE}$ and $\\mathrm{TCE}$ could be re-written. While they seem correct to me, could you elaborate on how $\\mathrm{TCE}(f_\\mathcal{I})$ is obtained in the form shown in Line 152? I did not find the details in the complete proof.\n\n3. According to Theorem 5, do the upper bounds indicate that UWB is a better binning strategy than UMB, given that UMB has an additional $\\mathrm{fCMI}$ bias term? It seems that only UMB's expected binning bias is sensitive to the training data, which might be seen as a disadvantage in terms of the upper bound.\n\n4. The writing can be significantly improved. For example, in Line 244, you mention \"Our theory might guarantee the ECE under test dataset for them.\" Do you mean your theory might guarantee low ECE under the test dataset? Additionally, in Lines 251-252, \"This implies that if the model generalizes well, evaluating the ECE using the training dataset may better reduce the total bias than that using test dataset.\" Why does evaluating ECE reduce total bias? What we really care about is ECE on unseen/test data. How does evaluating ECE on training data affect this purpose?\n\n5. Why is the metric entropy method only used for UWB? It seems that you upper bound $\\mathrm{eCMI}$ by the $\\mathrm{fCMI}$ term first in your proof. What prevents you from giving a similar result for UMB?\n\n6. In Line 339-340, you mention that \"a notable trend towards acquiring relatively stable nonvacuous bounds can be observed when adopting $B =\\lfloor n^{1/3} \\rfloor$\", but according to Figure 1, it seems $B=52$ is tighter than $B=B =\\lfloor n^{1/3} \\rfloor$ in most cases. Could you clarify this?\n\n7. Since $\\mathrm{eCMI}$ and $\\mathrm{fCMI}$ terms are key components in both standard generalization error and expected total bias of calibration error, do you have any new insights into the relationship between calibration and generalization from this perspective?" }, { "confidence": 4, "rating": 6, "review_id": "iFB55rwdoJ", "review_text": "This paper investigates the estimation bias in expected calibration error (ECE) for binary classification models, focusing on uniform mass binning (UMB) and uniform width binning (UWB). The authors present a comprehensive theoretical analysis, establishing upper bounds for the bias and the generalization error. Based on the convergence rates of binning and statistical bias, they identify the optimal number of bins to minimize the total estimation bias.\n\n* The paper provides a comprehensive analysis of the estimation bias in ECE, providing upper bounds and optimal bin size choices for both UWB and UMB.\n* The authors further derive upper bounds for the generalization error between ECE and TCE using an information-theoretic approach.\n* Numerical experiments on deep learning tasks confirm that the derived bounds are non-vacuous.\n\n* The provided results only apply to binary classification, and require Lipschitz continuity which may not be necessarily satisfied in deep learning models. Also, these bounds are analyzing the ECE using test data but not training data, making them less applicable since test data are not always available in practice.\n* The convergence rates of the information-theoretic generalization bounds heavily depend on the actual rate of eCMI and fCMI measures, which are not directly clear in analysis. In theorem 6, the authors show that eCMI scales as O(log n) based on metric entropy, but this bound involves the dimensionality d, and is thus hardly applicable to deep learning models.\n* For experimental results, only the statistical bias is evaluated but not the total generalization error. It is also hard to see to what extent these bounds are tight in the current results. These bounds are also hard to estimate due to the existence of eCMI or fCMI measures. I would suggest the authors additionally consider some synthetic settings where TCE, eCMI, and fCMI are analytically tractable to show the tightness of the bounds. (maybe Gaussian data points?)\n\nRecent information-theoretic bounds have shown improved rates of O(1/n) under the interpolating regime, and also direct computational tractability with loss CMI or entropy metrics. It may be worth some discussions on whether these techniques can be adopted to acquire tighter bounds.\n\nTighter Information-Theoretic Generalization Bounds from Supersamples. ICML 2023.\n\nRethinking Information-theoretic Generalization: Loss Entropy Induced PAC Bounds. ICLR 2024." }, { "confidence": 4, "rating": 6, "review_id": "sKy1NnYBYB", "review_text": "The paper studies the expected calibration error using information-theoretical tools. They derive different tight fCMI and eCMI bounds in this setting. Empirical results show that the results are nonvacuous.\n\n1/ The paper is in general well written. Adequate discussions are given in the main body of the paper and the appendices.\n\n2/ The paper provides the first information-theoretic comprehensive analysis of the bias associated with the ECE when using the test and training datasets.\n\n3/ The theoretical results seem sound. I skimmed through most of the proofs (I did not go through all of them in detail) but the proofs are well-structured and easy to follow. \n\n3/ Empirical results show that the bound is tight for deep learning models.\n\nThe only weakness, if any, is perhaps that the paper uses conventional machinery for deriving information-theoretic generalization bounds and that it has not developed novel proof techniques.\n\nBesides fCMI and eCMI based bounds, is it possible to extend the analysis and derive $\\Delta$-L based bounds [37]? These bounds are typically tighter compared to fCMI and eCMI." }, { "confidence": 4, "rating": 7, "review_id": "kqZ1EtNadg", "review_text": "This paper presents a comprehensive analysis of the estimation bias for expected calibration error (ECE), focusing on two common binning strategies: uniform mass and uniform width binning. The analysis establishes upper bounds on the bias, resulting in an improved convergence rate. Furthermore, these bounds reveal the optimal number of bins needed to minimize the estimation bias. The study also extends the bias analysis to generalization error analysis using an information-theoretic approach, deriving upper bounds that facilitate numerical evaluation for recalibration methods based on training data. Experiments with deep learning models demonstrate that the bounds are nonvacuous, due to the information-theoretic generalization analysis approach.\n\nAs the author pointed out, the existing literature lacks a theoretical analysis of the estimated ECE and a more principled approach to estimation. This paper addresses and closes this gap.\n\n1.\tTightness issue of the upper bound in Corollary 1. It is commendable that the authors included a discussion on the tightness of Equation 12. However, it would be more rigorous to formally establish a minimax lower bound for the estimation bias that applies to all types of estimators. The authors could either use existing results from Tsybakov [33] or construct a worst-case analysis using Le Cam’s method to establish the lower bound. While it is acceptable if the constant does not match the upper bound, it is crucial to demonstrate the rate.\n\n2.\tA drawback of information-theoretic (IT) bounds is the implicit dependency on the algorithm. For example, Theorem 7 appears very similar to Theorem 4, as the recalibration-induced dependence is encapsulated in the CMI term. The authors should provide more commentary on this aspect and clarify the connection between Theorems 6 and 5, as well as which bound is more practical for use.\n\n3.\tIn the caption of Figure 1, It is said that the ECE gap does not change significantly in B. How can we justify that the selection of $B = n^{1/3}$ is better? Figure 1 primarily plots the bound in (14), but as I mentioned earlier, such a bound can be very loose, and more empirical justification should be provided for the selection of optimal B.\n\n1.\tClarification: what is the ECE gap plotted in Figure 1 and Table 1? Estimated ECE? To my understanding, all the bounds in Figure 1 are quite loose. Shouldn't we plot the left-hand side of Equation 14 for a more accurate comparison?\n\n2.\tHow should the error bars for the bound values in Table 1 be interpreted? What is the source of the randomness?\n\n3.\tIt is not accurate to say that I(S;W)=O(log n) in Line 258, as such Barron’s result assume that samples Z are conditional i.i.d. given model parameter w. However, in the learning context, we always assume that training samples are i.i.d." } ]
ylceJ2xIw5
Fair Wasserstein Coresets
Data distillation and coresets have emerged as popular approaches to generate a smaller representative set of samples for downstream learning tasks to handle large-scale datasets. At the same time, machine learning is being increasingly applied to decision-making processes at a societal level, making it imperative for modelers to address inherent biases towards subgroups present in the data. While current approaches focus on creating fair synthetic representative samples by optimizing local properties relative to the original samples, their impact on downstream learning processes has yet to be explored. In this work, we present fair Wasserstein coresets ($\texttt{FWC}$), a novel coreset approach which generates fair synthetic representative samples along with sample-level weights to be used in downstream learning tasks. $\texttt{FWC}$ uses an efficient majority minimization algorithm to minimize the Wasserstein distance between the original dataset and the weighted synthetic samples while enforcing demographic parity. We show that an unconstrained version of $\texttt{FWC}$ is equivalent to Lloyd's algorithm for k-medians and k-means clustering. Experiments conducted on both synthetic and real datasets show that $\texttt{FWC}$: (i) achieves a competitive fairness-performance tradeoff in downstream models compared to existing approaches, (ii) improves downstream fairness when added to the existing training data and (iii) can be used to reduce biases in predictions from large language models (GPT-3.5 and GPT-4).
https://openreview.net/pdf/93985e308e0356a2b95c8e021f79d007aeda2429.pdf
[ { "confidence": 3, "rating": 6, "review_id": "2LfzT0oyko", "review_text": "This paper introduces a new data distillation technique called Fair Wasserstein Coresets. The general idea is to create a synthetic core set along with sample weights to represent a larger dataset, by minimizing the Wasserstein distance between core set and dataset, while ensuring a fairness constraint is satisfied. The paper develops a majority minimization algorithm for this Wasserstein problem and empirically validates it on several data sets demonstrating a competitive fairness utility trade-off.\n\n- The Wasserstein problem is well-formulated with theoretical guarantees.\n- The connections with k-medoids is intuitive.\n\n- I suspect there is a potential error in Proposition 2.1, specifically pertaining to the inputs and outputs defined in these functions. Note that z consists of inputs ($d, x$) and outputs ($y$) of the NN, whereas $g_{\\psi}$ is an MLP, i.e., it is a function that takes only $(d, x)$ as input. From [69 (original reference)], the MLP satisfies the Wasserstein inequality but only on the marginal distributions over p_{(x,d)} rather than over $p_{Z}$. This may be resolved if we consider not the Wasserstein distance of the MLP output, but instead the Wasserstein distance of the function $h(z) = | g_{\\psi}(x,d) - y |$.\n- What do you mean in Lemma 3.1 that the corset is “no better than the best fair Wasserstein corset formed by $m |D||Y|$ data points”? I suspect you mean better with regard to achieving a lower Wasserstein distance, but please clarify.\n- The empirical analysis in Figure 1 is hard to parse. Can you measure the Pareto frontier from all of the observations and demonstrate that FWC is dominant? FWC seems Pareto efficient for Adult, Crime, and Drug, but not Credit potentially — but it is hard to see.\n- It is hard to understand the trade-offs between accuracy and disparity in the LLM experiments in Table 1, just by reporting these numbers. How important is it that the disparity dropped by 0.009 at a 2.97 point loss in accuracy? Again, it would be important to demonstrate some Pareto efficiency. Furthermore, the change in accuracy and disparity do not seem statistically significant based on the SD reported.\n\n- Please clarify the potential error and comment about Proposition 2.1. If there is an error, what are the consequences with subsequent theoretical results?" }, { "confidence": 3, "rating": 6, "review_id": "QgsldCNLHJ", "review_text": "The paper gives an algorithm to generate smaller weighted synthetic dataset from real data set such that the synthetic data can enforce demographic parity when used for downstream tasks. This is achieved by solving an optimization problem of minimizing the Wasserstein distance between the two dataset distributions along with demographic parity-based fairness constraint. The authors describe how to efficiently solve this problem by reformulating it and subsequently using a majority minimization algorithm to solve the resultant nonconvex problem. They provide convergence guarantees for the algorithm and also generalization bounds for the solution. The theoretical results are supported by experiments on real and synthetic datasets.\n\n1) The paper for the most part is written clearly with some minor writing issues (see weaknesses). It is well structured and not too difficult to follow the high-level ideas. Both fairness and scalability are relevant issues so the paper will be of interest to the community. \n\n2) I could not check all proofs, but the theoretical results appear sound. The connection between the unconstrained problem and Lloyd's algorithm for $k$-means is neat. \n\n3) The authors have performed experiments on both real and synthetic datasets and compared with a number of existing methods. As such the paper is a good mix of theory and practice.\n\n1) The paper seems to borrow a lot of ideas and proof techniques from existing works like [56], [71] and others. for e.g. the reformulation, ideas to speed up the algorithm etc. As such I am not entirely sure about the novelty quotient of the work. It would be better if the authors can highlight why the modifications to techniques from existing works are non-trivial. \n\n2) The explainability of the synthetic data will be very less. Specifically, as far as I understood, the authors are assigning the output label and sensitive attribute value to the generated data points just in same proportion as that in the original data. It is not clear to me what does this mean for the individual synthetic data points. Also do the features in the generated synthetic data correspond exactly to the features in original data?\n\n3) I suggest the paper be proofread for minor corrections in writing: E.g.: In the contributions make the 'w' 's capitalized. On line 171 the authors say $P \\geq 0$ (which I think means each entry in non-negative) while on line 258 it is $P \\geq \\mathbf{0}$. Maintain the consistency. \n\n4) Should not the weight of the synthetic data sum to $n$ and not $m$ (line 141 $\\Delta_m$)? Typically, we try to preserve the weight of the original data in expectation while reweighing the sampled points. Please Clarify.\n\nSee weaknesses" }, { "confidence": 4, "rating": 7, "review_id": "nRPlBTOsFW", "review_text": "This paper proposed to extract coresets from a set of data samples using Wasserstein distance with fairness constraints. The authors formulates this problem as a minimization with linear constraints. The coreset selection is over the whole input space, not just from original data samples. The importance / weight of each coreset sample are also optimized. Extensive experiments show this method achieves better fairness-utility tradeoff, and can be applied in LLMs to reduce bias.\n\nThe paper is nicely written and easy to follow. I appreciate the detailed steps and neat reformulations of the optimization problem. Theoretical guarantees are provided. The experiments supports the effectiveness of the proposed method very well.\n\n1. In section 4.2 (line 203), how can problem Eq.(12) be separated into subproblems as in Eq.(13), are the optimal solutions of all subproblems the same and equal to the solution to (12)? \n2. In section 6 (line 259), why are the minimizers of problem (17) always has only one non-zero entry in each row? As problem (17) can be seen as a relaxed version of discrete Kantorovich problem, where we can't say anything about the sparsity of the optimal plan. Please elaborate.\n\nSee Weaknesses." }, { "confidence": 4, "rating": 6, "review_id": "RMXG27sSe5", "review_text": "This paper talks about \"fair Wasserstein coresets\", weighted representative points generated to represent the original datasets. The goal is to meet two purposes: 1) the Wasserstein distance of the coreset and the input data set is minimized, 2) fairness in terms of demographic parity. Having a small Wasserstein distance can help to bound the downstream discrepancy for ReLu activated perceptrons.\n\nThe authors formulate the problem as an optimization problem (4). \nThere are four steps:\n\t1. Manually set the proportion of each combination of decision (Y) and feature (D). \n\t2. Formulate linear constraints for the fairness constraint. This one borrows directly from [71].\n\t3. Formulate the Wasserstein distance optimization by using [56]\n\t4. Simply further\n\nAfter that, the problem is not convex. The authors use \"majority minimization\" [52, 38] to solve it. Specifically, one defines a convex surrogate function that upper bounds the non-convex function, and optimizes the convex function. \n\nSection 5 reports theoretical guarantees: running time of the algorithm, convergence guarantee for the surrogate function, and last bound the generalization guarantees. \n\nExperiments are reported in the last section: e.g., improving fairness in LLM. \n\nOn the positive side, the problem formulation is interesting and valid, Wasserstein coreset with fairness consideration. The use of this coreset for downstream applications make sense. Thus the problem and solution have merit. Experiment are thorough. \n\nThe weaknesses (or limitation in significance) is that both crucial steps (2) and (3) are basically using prior work. The theoretical results are standard. \n\nSummarizing I feel that the paper is OK but would give a weak accept.\n\nOn the positive side, the problem formulation is interesting and valid, Wasserstein coreset with fairness consideration. The use of this coreset for downstream applications make sense. Thus the problem and solution have merit. Experiment are thorough.\n\nThe weaknesses (or limitation in significance) is that both crucial steps (2) and (3) are basically using prior work. The theoretical results are standard.\n\nI understand that there are many different notions of fairness and the authors focus on one of them, demographic parity. This is OK. One suggestion is that if the authors can provide some discussions and insights on how the results may improve other notions of fairness it will be valuable to have." } ]
yktQNqtepd
Towards Flexible 3D Perception: Object-Centric Occupancy Completion Augments 3D Object Detection
While 3D object bounding box (bbox) representation has been widely used in autonomous driving perception, it lacks the ability to capture the precise details of an object's intrinsic geometry. Recently, occupancy has emerged as a promising alternative for 3D scene perception. However, constructing a high-resolution occupancy map remains infeasible for large scenes due to computational constraints. Recognizing that foreground objects only occupy a small portion of the scene, we introduce object-centric occupancy as a supplement to object bboxes. This representation not only provides intricate details for detected objects but also enables higher voxel resolution in practical applications. We advance the development of object-centric occupancy perception from both data and algorithm perspectives. On the data side, we construct the first object-centric occupancy dataset from scratch using an automated pipeline. From the algorithmic standpoint, we introduce a novel object-centric occupancy completion network equipped with an implicit shape decoder that manages dynamic-size occupancy generation. This network accurately predicts the complete object-centric occupancy volume for inaccurate object proposals by leveraging temporal information from long sequences. Our method demonstrates robust performance in completing object shapes under noisy detection and tracking conditions. Additionally, we show that our occupancy features significantly enhance the detection results of state-of-the-art 3D object detectors, especially for incomplete or distant objects in the Waymo Open Dataset.
https://openreview.net/pdf/7caaf2ac1f758304a70b57129814d809e45dc1b5.pdf
[ { "confidence": 4, "rating": 6, "review_id": "tM0Xllst5S", "review_text": "This paper presents a new task named object-centric occupancy completion as a fine-grained object representation to supplement the coarse-grained 3D bounding boxes. To accomplish this task, a new dataset, which annotates instance-level high-resolution occupancy, is created in an automated pipeline. This paper also introduces an implicit shape decoder to fuse multi-frame information, predict instance occupancy and refine 3D bounding boxes. Experiments on Waymo datasets above several baselines demonstrate the effectiveness of the proposed method on both occupancy prediction and 3D detection.\n\n1.\tThis paper is well-written and organized.\n\n2.\tA novel task, occupancy augments 3D object detection, and a corresponding new instance-level occupancy datasets is proposed.\n\n3.\tA implicit shape decoder is proposed and achieves great improvements both in occupancy and 3D detection.\n\n1.\tThe motivation of this paper does not seem to be very reasonable. The authors claim that a. high-resolution scene-level occupancy is constrained by computational cost and foreground objects is more import, and b. 3D detection is too coarse to capture the object geometry information. So why not just predict foreground instance-level occupancy in the whole scene, instead of pursuing higher detection accuracy by using the occupancy results? \n2.\tTime and memory cost bought by the proposed shape decoder are not provided. The paper is trying to make a trade-off between occupancy and detection in fine-/coarse-grained level and computational cost level. But the authors only report the occupancy and detection accuracy.\n3.\tSome methods, like VoxelNeXt, FSDv2, HEDNet are missing and are not compared in Table 1.\n4.\tTypos/mis-leading descriptions. For example, ‘Tab. 5.4’ on line 351 -> ‘Tab. 3’.\n\n1.\tWhy not just predict foreground instance-level occupancy in the whole scene, instead of pursuing higher detection accuracy by using the occupancy results? (the same as weakness 1)\n2.\tCould you provide the computational cost of your method or the proposed module?" }, { "confidence": 5, "rating": 6, "review_id": "rHmGQeubbW", "review_text": "In this work, the authors propose a novel task called object-centric occupancy.\nIt extends the 3D detected bounding box representation to provide a more detailed description of the internal object shape.\nThe method provides higher voxel resolution in large scenes by focusing on foreground objects only. \nIt not only achieves state-of-the-art performance on shape completion but can also help refine the object detection tasks on the Waymo Open Dataset (WOD).\n\n- The motion of the proposed task is clear, and the task itself shows good potential in scene understanding. It can enhance 3D detection results even at a far distance.\n- The extensive ablation studies validate each contribution. Various detector results with different settings help prove the robustness of the proposed methods.\nUsing implicit representation from a 3D reconstruction task to complete shapes is neat and interesting. It will be interesting to see how this work can be applied to popular 3D Gaussian representation.\n\n- The experimental results are only obtained on the Waymo Open Dataset. It will be nicer to conduct the experiments on nuScenes or Argoverse 2 to validate its robustness for different datasets.\n- Although the authors say it is a new task, so there are no learning baselines for shape completion, it will be interesting to compare the results with other scene occupancy methods. So that we can see the flaws of using coarse resolution quantitatively.\n\n- The extrapolated results of shape completion are interesting, showing that it can achieve a performance similar to that of using GT boxes. Will it also help with 3D Object Detection results?" }, { "confidence": 4, "rating": 6, "review_id": "l6wZZvh4DB", "review_text": "The manuscript introduces the idea of representing the shape of objects at higher fidelity (and independent of) the rest of the scene. This is explored in the context of autonomous vehicles research on 3d car detection and representation. The proposed model regresses a shape code and an updated 3d bounding box from a 3D bounding box tracklet (derived from any other algorithm) and the points included in it. The shape code can be queried for occupancy to produce a full shape representation during inference. The proposed approach is able to infer complete shapes from partial inputs and the updated 3D BBs improve the input 3D BBs.\n\nThe proposed approach is relatively straightforward, effective and well motivated. This makes it reusable for other works and the paper more reproducable.\n\nThe manuscript is well written and the illustrations help convey the message and improve understanding of the written parts. \n\nThe evaluation is comprehensive and the sensitivity studies are well chosen and help motivate architecture and training choices. In particular it is great to see that the addition of the high resolution shape code and updated 3D BB does lead to substantial performance improvements on the 3D BB detection task (especially for far away OBBs). And that the shape code (if given the GT OBB) does produce a high IoU occupancy grid even if the input 3D BBs are subpar (table 1).\n\nThe manuscript's related work section misses out on an existing related field of 3D CAD model retrieval (which also produces complete shapes) and shape regression from RGB (and depth data) in indoor scenes. Relevant related works include: \n - Scan2CAD https://openaccess.thecvf.com/content_CVPR_2019/papers/Avetisyan_Scan2CAD_Learning_CAD_Model_Alignment_in_RGB-D_Scans_CVPR_2019_paper.pdf\n - SLAM++ https://www.doc.ic.ac.uk/~ajd/Publications/salas-moreno_etal_cvpr2013.pdf\n - FroDO https://openaccess.thecvf.com/content_CVPR_2020/papers/Runz_FroDO_From_Detections_to_3D_Objects_CVPR_2020_paper.pdf\n\nI would have wanted to see a few renderings of the shape codes; This would support the claim that the model learns to complete shapes. The appendix has a few but the visualizations are hard to understand without a better renderer. Some kind of shading or edges for the 3D voxels are essential to see any kind of depth and thus shape (Fig 6 and 7). Extracting a mesh using marching cubes at the 0.5 isolevel might also work.\n\nThe model takes in a series of 3D BBs and outputs one updated 3D BB - at which timestamp is this 3D BB output? The latest?" }, { "confidence": 4, "rating": 5, "review_id": "DsvDlEvCNi", "review_text": "This paper addresses the limitations of 3D object bounding box representations in autonomous driving by introducing object-centric occupancy. It uses an implicit shape decoder to manage dynamic-size occupancy generation. The method demonstrates robust performance under noisy conditions, significantly enhancing detection results in the Waymo Open Dataset.\n\n1. The presentation is well-executed, with figures and charts effectively aiding reader comprehension.\n2. The overall performance is impressive, demonstrating significant improvements across multiple baselines.\n\n1. Creating detailed occupancy for each object seems unnecessary. In most downstream tasks in autonomous driving, using bounding boxes (bboxes) is sufficient.\n2. The performance improvement primarily stems from temporal feature fusion, which lacks significant technical innovation.\n3. It is unclear whether the loss on occ heads in Fig. 4 enhances detection performance. The authors should compare detection performance with and without occ heads after obtaining the Shape Emb. Z to determine if occ heads contribute to learning useful features, such as yaw estimation.\n\nSee weaknesses." } ]
ykQnxko1cJ
CemiFace: Center-based Semi-hard Synthetic Face Generation for Face Recognition
Privacy issue is a main concern in developing face recognition techniques. Although synthetic face images can partially mitigate potential legal risks while maintaining effective face recognition (FR) performance, FR models trained by face images synthesized by existing generative approaches frequently suffer from performance degradation problems due to the insufficient discriminative quality of these synthesized samples. In this paper, we systematically investigate what contributes to solid face recognition model training, and reveal that face images with certain degree of similarities to their identity centers show great effectiveness in the performance of trained FR models. Inspired by this, we propose a novel diffusion-based approach (namely **Ce**nter-based Se**mi**-hard Synthetic Face Generation (**CemiFace**) which produces facial samples with various levels of similarity to the subject center, thus allowing to generate face datasets containing effective discriminative samples for training face recognition. Experimental results show that with a modest degree of similarity, training on the generated dataset can produce competitive performance compared to previous generation methods. The code will be available at:https://github.com/szlbiubiubiu/CemiFace
https://openreview.net/pdf/30f78a219d61e45796c28fce873caf8b9bd87ab7.pdf
[ { "confidence": 5, "rating": 9, "review_id": "qceehVvhz0", "review_text": "This paper proposes CemiFace, a novel diffusion-based approach for generating synthetic face images with varying levels of similarity to their identity centers. The authors argue that semi-hard negative samples, those with moderate similarity to the center, are crucial for training effective face recognition models. The core of CemiFace lies in its ability to control the similarity between generated images and the input (identity center) during the diffusion process. This is achieved by injecting a similarity controlling factor condition (m) that regulates the similarity level. The paper presents a comprehensive analysis of the relationship between sample similarity and face recognition performance, showing that semi-hard samples, generated with m close to 0, achieve the best accuracy.\nCemiFace demonstrates significant improvements over previous methods in terms of accuracy, particularly on pose-sensitive datasets. The paper further validates its effectiveness through qualitative visualizations and ablation studies that examine the impact of various factors, including training data, inquiry data, and the similarity controlling factor. Overall, this paper contributes a valuable approach to generating synthetic face datasets for face recognition with enhanced discriminative power. The method shows promise in mitigating privacy concerns associated with collecting and using real-world face data while maintaining robust recognition performance.\n\nDiscovery of the importance of similarity control in synthetic face generation: CemiFace is motivated by the discovery that face images with certain degree of similarities to their identity centers show great effectiveness in the performance of trained FR models. This is an important discovery to the community of synthetic dataset generation. \nUnique use of similarity control: CemiFace introduces a similarity controlling factor (m) within the diffusion process, enabling the generation of faces with varying levels of similarity to the input image. This provides a fine-grained control over the generated data distribution, which is a unique feature compared to existing methods.\nComprehensive analysis of similarity: The authors present a thorough analysis of the impact of different similarity levels on face recognition performance, validating their hypothesis about the importance of semi-hard samples. This analysis provides valuable insights into the relationship between data distribution and model effectiveness.\nRigorous experimental evaluation: The paper conducts comprehensive experiments across various benchmark datasets and data volumes, comparing CemiFace with other state-of-the-art synthetic face generation methods. The ablation studies provide a detailed understanding of the influence of different parameters and factors on the model's performance.\nRobustness of CemiFace: The experiments demonstrate the robustness of CemiFace to different training data, inquiry data, and similarity controlling factors. The method consistently achieves superior results, demonstrating its effectiveness and generalizability.\n\n- An in-depth discussion on why face images with certain similarity is more beneficial as a training dataset for the face recognition model would strengthen the paper. For example, an analysis such as a similarity comparison the the real dataset and checking if the difficulty of the CemiFace synthetic dataset becomes closer to that of the real dataset would be nice. Other analysis that offers insights as to why certain similarity control is important would also be welcome.\n\n- Written in weakness section." }, { "confidence": 4, "rating": 5, "review_id": "ds5tnFKMDG", "review_text": "The paper introduces an approach called CemiFace for generating synthetic face images to enhance face recognition (FR) models. The paper provides the first in-depth analysis of how FR model performance is influenced by samples with varying levels of similarity to the identity center, focusing particularly on center-based semi-hard samples. The authors propose a unique diffusion-based model that can generate face images with different levels of similarity to the identity center. This model can produce infinite center-based semi-hard face images for synthetic face recognition (SFR). The method can be extended to leverage large amounts of unlabeled data for training, providing an advantage over previous methods. Experimental results demonstrate that CemiFace significantly outperforms existing SFR methods, reducing the GAP-to-Real error by half and showcasing promising performance in synthetic face recognition.\n\n- Focusing on center-based semi-hard samples to enhance face recognition performance is a fresh problem formulation that addresses a notable gap in current methodologies.\n\n- The paper provides a solid experimental validation of its proposed approach. The authors investigate factors affecting performance degradation in synthetic face recognition and offer a hypothesis about the importance of mid-level similarity samples.\n\n- The method for determining GtR remains unclear. Justification regarding how the proposed model yields a low GtR is absent. Is this low GtR attributed to the utilization of real inquiry images? If so, what measures guarantee that the synthetic facial images remain uncorrelated with the real facial images? In other words, I have a reservation that the method may not generate \"true\" synthetic data but highly relies on an inquiry image. Therefore, it is reasonable to see why a low GtR is obtained.\n\n- Figure 5 demonstrates that different identities (such as different genders) can be obtained with different m, even with the same input query. There seems to be no way to control the \"number of identities\" generated from this model. If so, how was the supervised loss applied to train a face recognition model? \n\n- How can one ensure high inter-class and large intra-class variations as required for SFR?\n\n- B.3.3. The assertion that high-quality data is not indispensable for achieving markedly accurate facial recognition performance is somewhat counterintuitive and perplexing.\n\n- The method's reproducibility raises concerns, particularly with respect to the training of the model, which lacks clarity. Specifically, the functions F_1 and F_2 in Equations (6) and (7), as well as the role of C_att, are not explicitly defined, and these elements are absent from Figure 3.\n\nThe proposed model generally lacks controllable factors to generate true synthetic face images that favor high inter-class and intra-class variations.\n\nSee above" }, { "confidence": 4, "rating": 5, "review_id": "jKp0cRqWHV", "review_text": "The paper proposes a novel approach named C to address privacy concerns in face recognition technology. The authors propose CemiFace, a diffusion-based method that generates synthetic face images with controlled similarity to a subject's identity center, enhancing the discriminative quality of the samples. This approach allows for the creation of diverse and effective datasets for training face recognition models without the need for large-scale real face images, thus mitigating privacy risks. CemiFace outperforms existing synthetic face recognition methods, significantly reducing the performance gap compared to models trained on real datasets. The paper also discusses the potential limitations and privacy implications of the approach, highlighting the need for ethical considerations in synthetic face generation for face recognition applications.\n\n1. The use of a diffusion-based model for generating semi-hard samples is an innovative approach that has not been extensively explored in the field of face recognition.\n2. The approach can be extended to use unlabeled data for training, which is an advantage over previous methods that often require some form of supervision.\n\n1. The paper is not well organized. This paper should be reorganized to make it easier for the reader to understand the contributions and technical details of this paper.\n2. Eq. 10 seems to be inconsistant to its description. According to the description, it is highly related to the time step.\n3. Fig. 3 is hard to understand. The training losses are not illustrated in the figure.\n4. Despite aiming to reduce privacy issues, CemiFace still uses a pre-trained model that could have been derived from datasets without user consent, raising ethical and privacy concerns.\n\nRefer to weakness" }, { "confidence": 4, "rating": 5, "review_id": "qRCLWt4fby", "review_text": "The paper titled \"CemiFace: Center-based Semi-hard Synthetic Face Generation for Face Recognition\" addresses a critical issue in face recognition (FR) related to privacy and performance degradation when using synthetic face images. The authors propose a diffusion-based approach, CemiFace, which generates facial samples with varying levels of similarity to an identity center. This method aims to enhance the discriminative quality of synthetic samples, thereby improving the performance of FR models trained on these datasets.\n\nIntroducing Similarity controlling factor in synthetic face generation using a diffusion based approach.\n\n(a) Due to the introduction of this similarity control conditioning in the diffusion process there must be a change in total sampling time ( certainly it will also depend on the number of time steps considered in the diffusion process also) – An illustration/analysis on computational complexity of the proposed algorithm is needed.\n\n(b) Seems like the overall process is dependent on how(using which method) the value of m was determined during the diffusion process! \n\n(c) A complete pseudo-code on the proposed method would have helped the reader to understand the whole process. \n\n(d) Figure 3 could have been much more elaborated and in more details.\n\nWhile comparing the proposed work with other SOTA methods - how did you generate the results of the SOTA method?" }, { "confidence": 4, "rating": 5, "review_id": "1XheAy5c8A", "review_text": "The paper proposes a new Face Recognition diffusion-based generation method. The diffusion process is completed with a semi-hard constraint on the synthetic reconstructed image: for each inquiry image of the (real) training set, the reconstructed image after the forward-backward diffusion process must have a specific cosine similarity with the inquiry image.\nAs it is usual for such methods in Face Recognition, the resulting synthetic dataset is then used for training a Face Recognition model. This model is evaluated across diverse real datasets.\n\nThe tackled problem is quite hard and needed at the same time. Current SOTA Face Recognition generation methods lead to a significant gap in terms of performance, compared to real Face Recognition datasets (of the same size).\nThe idea of controlling the similarity to design semi-hard samples is also interesting.\n\n1) In Fig. 1, is the displayed similarity really the cosine similarity ? In the generated samples, the line with perfect similarity (equal to 1) seems to provide synthetic images which would not have a perfect similarity with the inquiry images displayed above the hypersphere. \n\n2) [minor] In Eq. 3, the probability distribution of epsilon is not specified.\n\n3) The authors should cite explicitly the works that use the training loss (Eq. 2) in this precise form, as there are alternative loss functions for diffusion models. A discussion on the reasons of this particular choice of diffusion loss might be a plus (e.g. in the appendix).\n\n4) [minor] Although the lines 114-116 are accurate, they are misleading the reader. The widely known representation of Face Recognition embeddings is that they lie onto a hypersphere of dimension N, where each embedding is a point of the hypersphere. Those embeddings are clustered by identity on this sphere and the identity centers are roughly at the center of those clusters. The hypersphere mentionned in this paper is a hypersphere of dimension N-1, where the identity center is at the center of the sphere. \n\n5) In lines 120-125, the authors should detail the range of similarities to the identity center, for each of the 5 splits of the CASIA training set. Only the average similarity of each split is specified.\n\n6) [minor] Figure 3 should be a bit more explained than just its caption.\n\n7) [major] Lines 155-162 are not well written and it is hard to understand how the margin m is used to guide the diffusion process. In particular, F_1 and F_2 are not defined, while some unused F is mentionned. C_sim seems to be a vector of unknown size. Also, the temporal guidance is too briefly described.\n\n8) [minor] Some hyperparameters' values (alpha_t/beta_t, lambda) are not specified.\n\n9) The right part of Fig. 4 displays two curves that do not have the same meaning for the x-axis. For AVG, the similarity is a constrained similarity (m) for training CemiFace (i.e. a similarity between a real inquiry image and a synthetic image). For CASIA, it is the similarity between one real image and its identity center (not a real image). To sum up, for AVG it is a similarity between 2 images, while for CASIA it is between 1 image and its identity center. Thus, comparing the two curves does not seem meaningful.\n\n10) [major] The CosFace loss is used to train on synthetic datasets, while AdaFace is used to produce (identity-oriented) embeddings for the CemiFace training set generation. There should be only one model for both tasks, for fair comparisons. On Table 6, training on CASIA with AdaFace gives better results than with CosFace, so one could attribute the good performance of CemiFace to the fact that the authors used a stronger model (AdaFace) to generate the synthetic dataset than the model used to train on this dataset (CosFace). In addition, there should be a part studying the impact of this AdaFace choice (i.e. another loss), at least in the appendix.\n\n11) The ROC curve on IJB-B/IJB-C for all synthetic methods of Table 6 would be a plus, as the accuracy is easily saturated, and not really used in industrial use-cases. Previous papers (related works) provide such ROC plots.\n\n1) Could you explain the last sentence of Section 4.2.1 (lines 260-261) ?\n\n2) In Section 4.2.2, why is the range of training m equal to [0,1] while the previous subsection concludes with an optimal range [-1,1] ?" } ]
ykACV1IhjD
Controlling Continuous Relaxation for Combinatorial Optimization
Unsupervised learning (UL)-based solvers for combinatorial optimization (CO) train a neural network that generates a soft solution by directly optimizing the CO objective using a continuous relaxation strategy. These solvers offer several advantages over traditional methods and other learning-based methods, particularly for large-scale CO problems. However, UL-based solvers face two practical issues: (I) an optimization issue, where UL-based solvers are easily trapped at local optima, and (II) a rounding issue, where UL-based solvers require artificial post-learning rounding from the continuous space back to the original discrete space, undermining the robustness of the results. This study proposes a Continuous Relaxation Annealing (CRA) strategy, an effective rounding-free learning method for UL-based solvers. CRA introduces a penalty term that dynamically shifts from prioritizing continuous solutions, effectively smoothing the non-convexity of the objective function, to enforcing discreteness, eliminating artificial rounding. Experimental results demonstrate that CRA significantly enhances the performance of UL-based solvers, outperforming existing UL-based solvers and greedy algorithms in complex CO problems. Additionally, CRA effectively eliminates artificial rounding and accelerates the learning process.
https://openreview.net/pdf/885375ae024fb0ad2338fe00a1eb658617f6ce3c.pdf
[ { "confidence": 4, "rating": 6, "review_id": "LQrPt7pnIZ", "review_text": "This article finds that the existing UL-solvers will trap into local optima and face rounding issues. This study proposes a continuous relaxation annealing (CRA) strategy and an auxiliary function to facilitate training.\n\n1. The method proposed in the article is sound, easy to implement, and effective.\n2. The article is well-written.\n\nThere are no major drawbacks in this article. There should be more reviews on neural combinatorial optimization solvers that apply annealing ideas as well (such as [1] provides annealing on the distance matrix of TSP).\n\n[1] Lin, Xi, et al. \"\"Continuation path learning for homotopy optimization.\"\" International Conference on Machine Learning. PMLR, 2023.\n\n1. Figure 8 provides parameter analysis for the N=10000 MIS problem. Can parameter sensitivity analysis be provided for other CO problems to demonstrate that CRA can be widely applied to general CO problems without special parameter design?\n2. I am also concerned about the convergence speed under different parameter settings. Could you provide it as a function of the initial\nscheduling and scheduling rate?\n3. Can CRA be applied to routing problems such as TSP? If I understand correctly, the current $\\phi$ function will have very small $p$-values in solving TSP, which may probably lead to a failure situation of CRA." }, { "confidence": 4, "rating": 6, "review_id": "rjIxqlU24v", "review_text": "The proposed approach is an optimization method for each graph over GNN parameters where each output corresponds to the likelihood of the node belonging to the solution. The objective function consists of a penalty term along with a parameter scheduled to control the non-convexity of the objective.\n\n1- The convex annealing approach proposed in training that controls the level of non-convexity. This is a valid approach to avoid getting trapped in local minima where the solution sizes are not large. \n\n2- Theoretical results of the limiting points of the proposed objective with different \\gamma. \n\n3- The \"no-data\" requirement makes this method mostly generalizable, depending on tuning a set of hyper-parameters for each graph distribution.\n\n[Major Comments]\n\n1- The need to solve graph-based NP-hard problems that are originally formulated as ILPs stems from the unscalability of these solvers. For example, the scalability of the MIS problem depends on the number of nodes and the number of edges in the graph. This needs to be the motivation instead of the issues encountered in UL-based solvers.\n\n2- While the proposed approach does not require training data (labeled or unlabeled), there are several hyper-parameters. Tuning these hyper parameters is a challenge. Further discussion is needed here. \n\n3- Getting trapped in local minima is not only the case in GNNs or PI-GNNs. It exists for any continuous relaxation of Problem 1. This is due to the non-convexity inherited in these formulations. For example, if we re-write Problem (3) in matrix form, we can see that the objective has a constant hessian equal to the adjacency matrix of the graph. If the magnitudes and signs of the eigen values vary significantly, then this indicates possible positive and negative curvatures in the loss landscape. Replacing x in Problem (3) with the output of a GNN does not guarantee changes. Although it may be possible that it will make some local minima avoidable by adaptive optimizers (such as ADAM), there is a possibility that this type of overparameterization would create unwanted local minima that do not result in any feasible solutions. Theoretically analyzing this is very complicated due to the use of a GNN. However, empirical investigation can be used to better motivate and understand the proposed approach. \n\n4- Similar to the previous point, rounding issues existed even before GNNs. See the SDP relaxations of MIS [1] and MaxCut [2] and how their dependence on rounding techniques (e.g. spectral clustering [3]) often fails to obtain optimal solutions. Rewriting is needed here. \n\n5- The Stationary point p* = 0_n was not discussed in Section 3.1. Furthermore, in line 208, it is 0_n, whereas in line 2016, it is 0_N. \n\n6- How was the GW approximation applied for the MaxCut problem? This approximation requires a normalized random vector drawn from the standard Gaussian distribution. How many samples were drawn? Given the SDP solution, one can simply draw multiple samples and pick the best where the only requirement is matrix-vector multiplication. This runs extremely fast with (i) no parameters of a NW, and (ii) no hyper-parameters to tune. The scenarios where such approaches fail need to be the motivation to propose the over-parameterized approach with convex annealing. \n\n7- Missing many \"data-independent\" baselines (methods that do not require pre-trained models (such as DIFUSCO [4]) or training data such as RL-based solvers (LwD [5])) for comparison such as ILP solvers (Gurobi, CPLEX, or CP-SAT [6]), sampling methods such as iSCO [7], SOTA heuristics such as ReduMIS [8], and differentiable solvers such as [9]. \n\n8- Why does the paper only consider d-regular graphs? How about the performance on other graphs? How does the run-time of this method scale in terms of the graph order and density? This is a major limitation of this work.\n\n[Minor Comments]\n\n1- What is script C in line 83?\n\n2- What is I and J in the equation after line 86?\n\n3- \"nural\" in line 106.\n\n4- Cite equation 3. An example is [10]\n\n5- Paragraph 149 to 151 is ill-sentenced.\n\n6- “Indeed” in line 232.\n\n7- Cite Potts variable optimization.\n\n8- This study “employs” in line 237.\n\n9- \"are\" in Appendix F.3 in line 247.\n\n[References]\n\n[1] On the shannon capacity of a graph. IEEE TIT, 1979.\n\n[2] Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. JACM, 1995.\n\n[3] A tutorial on spectral clustering. Springer, 2007.\n\n[4] Difusco: Graph-based diffusion solvers for combinatorial optimization. NeurIPS, 2023.\n\n[5] Learning What to Defer for Maximum Independent Sets. ICML, 2020.\n\n[6] https://developers.google.com/optimization\n\n[7] Revisiting sampling for combinatorial optimization. ICML, 2023.\n\n[8] A differentiable approach to the maximum independent set problem using dataless neural networks. Neural Networks, 2022.\n\n[9] A branch and bound algorithm for the maximum clique problem. Computers & operations research, 1992.\n\nSee Weaknesses." }, { "confidence": 4, "rating": 4, "review_id": "x9M5ZRQCV6", "review_text": "This paper aims to tackle shortcomings of the existing unsupervised learning-based solvers for combinatorial optimization, namely the local optima issue and the rounding issue. It proposes a novel technique called continuous relaxation annealing (CRA) strategy which introduces an additional penalty term to smooth the non-convexity of the objective function. This strategy is empirically shown to not only enhance the solution quality but also accelerate the learning process.\n\n1. This paper is an interesting study on the unsupervised-learning based approaches on CO problems. The proposed method is simple but proves to be quite effective.\n2. The empirical evaluation shows that CRA achieves a consistent improvement over PI-GNN \n3. The authors have conducted extensive qualitative and quantitative analysis to help understand the proposed method\n\n1. My main concern lies in the technical contribution from this paper. The whole framework and empirical evaluation is built upon PI-GNN, which makes the observation and conclusion from this paper not generalizable.\n2. I feel the research from this paper is kind of out-of-date. Check https://openreview.net/forum?id=ZMP0Bki9aK for SOTA results on the CO problems considered in this paper. In fact, [1] also mention that simulated annealing would perform better than GNN, but only greedy methods are used as baselines.\n\n\n[1] Maria Chiara Angelini and Federico Ricci-Tersenghi. Modern graph neural networks do worse than classical greedy algorithms in solving combinatorial optimization problems like maximum independent set. Nature Machine Intelligence, 5(1):29–31, 2023.\n\nN/A" }, { "confidence": 1, "rating": 4, "review_id": "lMxD1L8eMK", "review_text": "This paper presents a heuristic method for producing solutions to combinatorial optimization problems, which is based around solving a continuous relaxation of the problem. The main focus of the paper is on an additional penalty term to add to the objective of this relaxation which aims to reward solutions that are closer to satisfying the integrality constraints on the decision variables.\n\nThe computational study seems relatively comprehensive in that it studies a number of different problem settings in a fair amount of detail.\n\nThe paper is very dense and hard to follow, with little context provided to the reader. The method presented and evaluated in the computational study is ultimately an extension of the \"PI-GNN\" solver, but this fact is oddly kind of buried, with only an indirect reference in the introduction (\"the solver that applies the CRA to the PI-GNN solver is referred to as the CRA-PI-GNN solver\", with no indication that this is a main takeaway from the work), and then again at the end of Section 3.2. The paper does not explain in detail or formality what the PI-GNN solver is or how it works (how, specifically, does the CGA actually hook into PI-GNN?), and so a reader without prior familiarity cannot really understand or assess the new contributions laid out in Section 3. Ultimately, I do not feel confident that I can understand, and thus evaluate, the contributions proposed in the paper.\n\nThe main contribution of the paper is an additional penalization term to induce solutions that are feasible w.r.t. the binary constraints on the decision variables, but I do not see explicit discussion in the Experiments section about the feasibility of the solutions produced (wr.t. both the integrality constraints and the other equality/inequality constraints). Are all of the solutions used in the computational study feasible for the \"true\" problem? If there are numerical tolerances used to \"fudge\" exact feasibility, what are these tolerance values?" } ]
yiXZZC5qDI
From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models
While state-of-the-art diffusion models (DMs) excel in image generation, concerns regarding their security persist. Earlier research highlighted DMs' vulnerability to data poisoning attacks, but these studies placed stricter requirements than conventional methods like 'BadNets' in image classification. This is because the art necessitates modifications to the diffusion training and sampling procedures. Unlike the prior work, we investigate whether BadNets-like data poisoning methods can directly degrade the generation by DMs. In other words, if only the training dataset is contaminated (without manipulating the diffusion process), how will this affect the performance of learned DMs? In this setting, we uncover bilateral data poisoning effects that not only serve an adversarial purpose (compromising the functionality of DMs) but also offer a defensive advantage (which can be leveraged for defense in classification tasks against poisoning attacks). We show that a BadNets-like data poisoning attack remains effective in DMs for producing incorrect images (misaligned with the intended text conditions). Meanwhile, poisoned DMs exhibit an increased ratio of triggers, a phenomenon we refer to as 'trigger amplification', among the generated images. This insight can be then used to enhance the detection of poisoned training data. In addition, even under a low poisoning ratio, studying the poisoning effects of DMs is also valuable for designing robust image classifiers against such attacks. Last but not least, we establish a meaningful linkage between data poisoning and the phenomenon of data replications by exploring DMs' inherent data memorization tendencies. Code is available at https://github.com/OPTML-Group/BiBadDiff.
https://openreview.net/pdf/85b57c5d8bead49ab7cc6ae03009e613abc6dd86.pdf
[ { "confidence": 4, "rating": 6, "review_id": "NFdJX2bPFi", "review_text": "The paper proposes a new poisoning attack for diffusion models (DMs). While previous work tried to poison/backdoor DMs by altering the training process or the optimization objective, the paper proposes a poisoning attack by only altering the training data. \nTo poison DMs, a trigger is inserted into training images, and the labels of the poisoned samples are changed to the target class. The resulting DM, trained on this poisoned dataset, generates images not aligned with the given prompt or images containing the trigger pattern used for poisoning.\nBased on this behavior, insights are presented that might help protect DMs against poisoning attacks, and a different view on data replication in DMs is given.\n\n- The paper tackles a very important topic as the risk of poisoned data is increasing when training DMs on publicly available data scraped from the web\n- The insight that DMs generate images of the target class with the trigger, even though the trigger has not been present in the target class training images, is very intriguing. However, the paper doesn't really give an intuition or explanation on why this is the case (see questions).\n\n- Only training details about the Caltech15 dataset are provided in the appendix. (see questions)\n- It is unclear how this proposed method can be applied to datasets like LAION or other uncurated/unstructured datasets without clearly separated classes.\n- In the experimental setting, it is stated that experiments on CIFAR-10 are conducted. However, in the experimental evaluation, there are results for CIFAR-10. Only ImageNette and Caltech15 are used to show the effectiveness of the poisoning attack. (see questions)\n- The paper is sometimes hard to read, and in parts, it is difficult to grasp what the authors want to convey as the take-away message of the paper is not really clear, in my opinion.\n- Using the \"poisoned DM\" as a defense against poisoning attacks is not very realistic or applicable, in my opinion. In reality, a DM would first have to be trained to generate data and apply the poisoning detection method to the generated data before even starting to train the classifier. In addition, the improvement of the AUROC for the poisoning detection methods is only very minor (in most cases, less than 1 percentage point improvement of the AUROC value).\n- The data replication experiments are not really meaningful, in my opinion. If we look at replicated images, it is expected that these images are replicated more than randomly chosen images. The experiment would be more meaningful if the same images would be once poisoned and once not poisoned. This would give insight into whether the poisoning really affects the data replication abilities of the DM.\n- there are two other works [1, 2] that use DMs for defending against poisoning attacks that should be mentioned in the related work part \n\n[1] Zhou et al., DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models, AAAI 2024\n[2] Struppek et al., Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data, NeurIPS 2023 Workshop BUGS\n\nMisc:\n- Many of the cited papers are arXiv papers and not the conference versions (VillanDiffusion is NeurIPS, \"Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning\" is \"conference on multimedia\", Rickrolling the artist is NeurIPS, etc.). Please cite the proper conference versions of the papers.\n- The titles in the references only include lower characters. This seems to be a bibtex/latex problem.\n- Reading \"the work [...]\" is not really smooth. Instead, it would be better to write the author's names as \"Chou et al. have done ....\"\n- The links in the appendix should be blue to indicate that they are clickable. I almost missed them. Also, it might be preferable to show the URL so the reader knows which site is linked without clicking on the link in the first place.\n\nThe paper tackles a very interesting problem, and the discovered phenomena seem to be very surprising. However, in my opinion, the paper is not quite ready for publication because of the unclear take-away message and the sometimes hard-to-read text.\n\n**Q1:** How many samples were used to train the DMs on the ImageNette and the CIFAR-10 dataset? \n**Q2:** What are the experimental results for CIFAR-10? \n**Q3:** Why choose the black and white square as the first trigger? Why not just use a uni-colored square as in the original BadNets paper? \n**Q4:** I can imagine that the appearance of the trigger also plays a significant role in whether the poisoning is successful or not. You have chosen the black and white square. Does the same phenomenon also appear with other patterns? \n**Q5:** How many samples were used to calculate the FID scores in Table 2? \n**Q6:** What is the reasoning/intuition behind the phenomenon that the DMs seem to generate an image of the target class containing the trigger, even though the target class images in the training set didn't have the triggers?" }, { "confidence": 4, "rating": 6, "review_id": "V9y0RexaHM", "review_text": "The paper investigates the impact of BadNets-like data poisoning attacks on state-of-the-art diffusion models (DMs) used for image generation. Unlike previous studies that required modifications to the diffusion training and sampling procedures, this work examines the effects of poisoning the training dataset alone. The study uncovers dual effects of data poisoning, which not only degrade the generative performance of DMs but also provide defensive advantages for image classification tasks. Key findings include the misalignment between input prompts and generated images, the amplification of trigger generations, and the linkage between data poisoning and data replications.\n\nThe major contributions of this paper are as follows. It demonstrates that diffusion models (DMs) are vulnerable to BadNets-like data poisoning attacks, leading to two significant adverse effects: (1) misalignment between input prompts and generated images, and (2) an increased generation of images with embedded triggers, referred to as 'trigger amplification'. The study identifies a phase transition in the poisoning effect relative to the poisoning ratio, revealing the nuanced dynamics of data poisoning in DMs. The proposed 'Castle Walls' concept introduces defensive strategies for image classification, including leveraging trigger amplification for detecting poisoned training data, training classifiers with images from poisoned DMs before the phase transition to mitigate poisoning, and using DMs as image classifiers to enhance robustness against attacks. Additionally, the paper establishes a connection between data poisoning and data replication in DMs, showing that introducing triggers into replicated training data exacerbates both the replication problem and the impact of poisoning, thus highlighting the inherent data memorization tendencies of DMs.\n\nOriginality: The paper presents an innovative investigation into the impact of BadNets-like data poisoning attacks on state-of-the-art diffusion models (DMs) used for image generation. Unlike previous studies that require modifications to the diffusion training and sampling procedures, this work uniquely focuses on the effects of poisoning the training dataset alone. This fresh perspective uncovers dual effects of data poisoning, revealing both degradation in generative performance and potential defensive advantages for image classification tasks. The introduction of the 'Castle Walls' concept for defensive strategies is original, offering new ways to leverage data poisoning effects to enhance robustness against attacks.\n\nQuality: The quality of the research is reflected in its comprehensive experimental analysis and the depth of its findings. The study methodically demonstrates the vulnerability of DMs to BadNets-like attacks, detailing how these attacks cause misalignment between input prompts and generated images and amplify trigger generations. The paper includes a thorough examination of defensive strategies, including the innovative use of poisoned DMs for training classifiers. \n\nClarity: The paper is well-structured and clearly communicates its methodology, findings, and implications. The key concepts and contributions are articulated in an accessible manner, with detailed explanations of the experimental setup and results. While there are minor editorial issues, such as the need for clarification in figure captions and consistent notation, these do not significantly detract from the overall clarity of the paper. The inclusion of detailed figures and tables aids in the clear presentation of the data and results.\n\nSignificance: The significance of this work lies in its potential to substantially enhance the understanding and robustness of DMs in the face of data poisoning attacks. By uncovering the dual effects of data poisoning and proposing innovative defensive strategies, the paper provides valuable insights that can inform future research and practical applications. The connection established between data poisoning and data replication highlights the inherent data memorization tendencies of DMs, offering a deeper understanding of their vulnerabilities.\n\nAdditional statistical analysis (e.g., confidence intervals) could strengthen the findings by accounting for variability and ensuring the observed improvements are statistically significant.\n\nExperimental Robustness: The lack of reported error bars due to computational expense raises concerns about the robustness and representativeness of the experimental results. Without statistical measures of variability, it is challenging to assess the reliability of the findings. Constructive suggestion: Provide some supporting evidence or alternative measures to demonstrate the robustness of the results, such as reporting confidence intervals for a subset of the experiments.\n\nComprehensive Defensive Strategies: While the 'Castle Walls' concept is innovative, the practical implementation details of these defensive strategies are not fully explored. Constructive suggestion: Provide more detailed guidelines and examples on how these strategies can be implemented in real-world scenarios to enhance their practical applicability.\n\nIn the figure captions, there is mention of G3 and G4 (that do not contain trigger), but these are not referred to in Figure 2 itself (only G1 and G2 are). Highlight in the text why these are missing and now shown? \n\n Checklist - Q7 Justification: Error bars are not reported because it would be too computationally expensive. How can we have confidence that the experimental results are representative and robust and not prone to statistical chance. Provide some supporting evidence.\n\nWhen non-monotic results are observed (for example Bad-Nets 2 on ImageNette, SD, Caltech15), explain why increasing the poisoning rate from 1 to 5% provides an AUROC improvement but an increase from 5% to 10%. \n\nLine 217, Page 6, Use the same notation as in the paper. “Fig A3 presents” -> A3 of which figure? Provide full reference." }, { "confidence": 4, "rating": 6, "review_id": "4Bqtt0Sj3A", "review_text": "This paper investigates backdoor attacks against diffusion models. Unlike previous works that require both injecting poisoned data samples and manipulating the training loss function, this study focuses solely on poisoning training data samples during the training phase. The research demonstrates that backdoor attacks not only compromise the functionality of diffusion models (resulting in incorrect images misaligned with the intended text conditions) but also amplify the presence of triggers, a phenomenon termed 'trigger amplification.' This trigger amplification can be utilized to enhance the detection of poisoned training data, thereby providing a defensive advantage.\n\n-- This paper is easy to follow.\n\n-- This paper demonstrates that simply poisoning the training dataset can effectively backdoor diffusion models.\n\n-- Conduct comprehensive experiments. Impressive results especially in attack success rate.\n\n-- Discuss the limitation of the proposed attack and future work.\n\n-- The evaluation of the proposed attacks is limited to 3 datasets: CIFAR10, ImageNette and Caltech15.\n\n1. It would be better if the authors can evaluate the proposed method on more datasets such as ImageNet1K and CIFAR-100\n2. It is suggested that the attack model be described in a separate section." }, { "confidence": 3, "rating": 6, "review_id": "fIa0DlLeZW", "review_text": "The paper studies BadNet-like poisoning attacks in diffusion models from both attack and defense perspectives.\n\n1. I think the paper makes interesting observations for the community, especially regarding the phenomenon of trigger amplification.\n\n2. The evaluation seems quite comprehensive, considering multiple datasets, models, attacks, and detection methods.\n\n1. Even though the authors consider many settings, the experiments are run only once (no error bars are shown).\n\n2. While in Table 4, the attacks' success rates are reduced when the poison percentage is up to 5%, I am wondering if they are amplified for higher poison percentages. If so, how could the defender use this as a defense in practice if they do not have any knowledge about the poison percentage?\n\n3. The paper is fully empirical.\n\nMinor comment: at line 252 \"comapred\" should be \"compared\".\n\nSee weaknesses." } ]
yhd2kHHNtB
Avoiding Undesired Future with Minimal Cost in Non-Stationary Environments
Machine learning (ML) has achieved remarkable success in prediction tasks. In many real-world scenarios, rather than solely predicting an outcome using an ML model, the crucial concern is how to make decisions to prevent the occurrence of undesired outcomes, known as the *avoiding undesired future (AUF)* problem. To this end, a new framework called *rehearsal learning* has been proposed recently, which works effectively in stationary environments by leveraging the influence relations among variables. In real tasks, however, the environments are usually non-stationary, where the influence relations may be *dynamic*, leading to the failure of AUF by the existing method. In this paper, we introduce a novel sequential methodology that effectively updates the estimates of dynamic influence relations, which are crucial for rehearsal learning to prevent undesired outcomes in non-stationary environments. Meanwhile, we take the cost of decision actions into account and provide the formulation of AUF problem with minimal action cost under non-stationarity. We prove that in linear Gaussian cases, the problem can be transformed into the well-studied convex quadratically constrained quadratic program (QCQP). In this way, we establish the first polynomial-time rehearsal-based approach for addressing the AUF problem. Theoretical and experimental results validate the effectiveness and efficiency of our method under certain circumstances.
https://openreview.net/pdf/55d4e7f5ce2356d9c39fa5ab1bfe2753b2323f87.pdf
[ { "confidence": 3, "rating": 5, "review_id": "Y883dBPwzX", "review_text": "The paper studies the non-stationary setting in avoiding undesired future (AUF) problems, where environmental shifts can cause the failure of existing AUF methods. It introduces an optimization problem for AUF with minimal action cost in non-stationary environments, formulated as a convex quadratically constrained quadratic program (QCQP) in each interaction. The paper also proposes a rehearsal-based algorithm to solve this problem, providing theoretical guarantees and numerical validations.\n\nThe paper is well-written, introduces a practical and interesting setting for AUF problems, and presents an algorithm with theoretical guarantees and numerical validations to address the task.\n\n(1) The provided algorithms lack a regret bound (or other theoretical guarantees) on the cost (i.e., the objective function), although it guarantees effective alterations (i.e., the constraint). Since the aim of this work is to avoid an undesired future with minimal cost, a regret bound analysis is, in my opinion, important.\n\n(2) In Theorem 3.3, the estimation error depends on the minimum eigenvalues of the empirical error functions' Hessian matrices, which in turn depends on the previously taken alterations. This raises a concern about the exploration-exploitation tradeoff when making alterations. An extreme case is making uninformative alterations (e.g., setting 0 for all nodes), leading to no update by Algorithm 1 and rendering the error bound in Theorem 3.3 meaningless (since $\\mu_j$=0 in this case if I understand correctly). It is unclear how Algorithm 3 addresses this tradeoff and how $\\mu_j$’s can be bounded below.\n\n(1) How do algorithms handle the exploration-exploitation tradeoff (if explorations are needed)?\n\n(2) Is it possible to establish a regret bound for the cost? If not, what are the challenges?" }, { "confidence": 4, "rating": 7, "review_id": "nW2dzVN6gJ", "review_text": "In this paper, the authors address decision-making problem that sufficient interactions are not available. In this case, RL is not suitable. The authors model the structure among the observed variables, and use the structure to help the decisions. Compared to the previous studies [Qin et al. 37], the method can be used in a dynamic environment and can efficiently find the suggested decision (in polynomial time). To deal with the dynamic environment, they introduce the online learning method (Alg. 2). To efficiently find the suggested decision, they convert the optimization problem to a QCQP problem, which can be implemented in polynomial time. The experimental results verify the effectiveness.\n\n1. The method of Qin et al. [37] suffers a high computational cost. In this paper, the authors convert the problem to a QCQP problem, which makes it computable in polynomial time. It is a valuable contribution.\n\n2. Theorem 3.3 presents an interesting and sensible theoretical guarantee. It is novel to see that some traditional online learning methods could be used in such decision tasks.\n\nSome discussion about the offline RL are missing. See Questions for the details.\n\nGiven the results of theorem 3.5: I do not know where $\\tau$ is reflected in your algorithm. It seems that $\\tau$ is never mentioned in Section 3.3. It is a bit wired, and needs more illustrations.\n\nThe writing could be improved. There are some weird sentences. I suggest the authors carefully revise the paper. For example, \"We provide the theoretical guarantees of our method, and experimental results validate the effectiveness and efficiency of the method.\" -> \"We provide the theoretical guarantees for our method. Experimental results validate the effectiveness and efficiency of the method.\"\n\nI agree that RL is not suitable for the setting. However, I am wondering why offline RL cannot be used instead? Relevant discussions are missing.\n\nI can understand that the problem is hard in the non-linear case. Could authors have some discussions for the case that the data is non-linear?" }, { "confidence": 3, "rating": 7, "review_id": "wbX4M1TxHu", "review_text": "The authors formulate the Avoiding Undesired Future (AUF) problem in real-world scenarios of decision-making, especially in non-stationary environments, and propose a method to avoid undesired outcomes with minimal costs. Here the non-stationarity majorly comes from the different costs corresponding to different actions, and the varying influence relations over time. They also provide theoretical guarantees of their method and empirical results demonstrate the effectiveness and efficiency of the proposal.\n\n- This paper is written well and clearly, with intuitive motivation and clarified novelty.\n\n- This paper includes a complete theoretical analysis and algorithmic design. Their proposed problem formalization is more general and practical than existing methods [37]. In particular, they first proposed a sequential method to maintain the dynamical influence, with guarantees of estimation error bound. They entailed Proposition 3.2 and Theorem 3.5 to help find the efficient alteration for $Z_t$ with the minimal cost. They finally propose the whole algorithm called AUF-MICNS, to avoid undesired outcomes in each decision round.\n\n- Experimental results show the effectiveness and efficiency of their proposed algorithm, where the evaluation metrics are success frequency, estimation error, average running time, etc.\n\nI think my major concerns have been settled by the Supplementary Materials. So I have no other comments about the weaknesses.\n\nFor the differences between SRM in the rehearsal graph and SCM in causality:\n- In the linear cases, it is easy to define the coefficients as the influences. When in the nonlinear cases, how to define the influences in the rehearsal graph? Is it the same as in causation, e.g., definitions of causal influence or causal effects? \n- Can the influence in rehearsal graphs (SRM) represent the bi-directional edge information? If not, I am confused what are the differences between such bi-directional relations in rehearsal learning and causality. Though in [35], the bidirectional edges are often due to common causes between two variables, there also exist some works that use causality to represent mutually influenced relationships[1*]. A causal graph can also include cycles.\n- The operators in Figure 2 seem identical to the Intervention operator in causality.\n\n[1*] Vimaleswaran K S, Berry D J, Lu C, et al. Causal relationship between obesity and vitamin D status: bi-directional Mendelian randomization analysis of multiple cohorts[J]. PLoS medicine, 2013, 10(2): e1001383.\n\nThere are other minor typo errors:\n- It seems that in Eq.(1) or Eq.(3), it is better to add $t$ as a subscript for $V_j$ and $\\varepsilon_j$?\n- In line 181, \"ound\" might be \"round\"." } ]
ygDl8q02gA
Optimal Algorithms for Learning Partitions with Faulty Oracles
We consider a clustering problem where a learner seeks to partition a finite set by querying a faulty oracle. This models applications where learners crowdsource information from non-expert human workers or conduct noisy experiments to determine group structure. The learner aims to exactly recover a partition by submitting queries of the form ``are $u$ and $v$ in the same group?'' for any pair of elements $u$ and $v$ in the set. Moreover, because the learner only has access to faulty sources of information, they require an error-tolerant algorithm for this task: i.e. they must fully recover the correct partition, even if up to $\ell$ answers are incorrect, for some error-tolerance parameter $\ell$. We study the question: for any given error-tolerance $\ell$, what is the minimum number of queries needed to learn a finite set partition of $n$ elements into $k$ groups? We design algorithms for this task and prove that they achieve optimal query complexity. To analyze our algorithms, we first highlight a connection between this task and correlation clustering. We then use this connection to build a Rényi-Ulam style analytical framework for this problem, which yields matching lower bounds. Our analysis also reveals an inherent asymmetry between the query complexity necessary to be robust against false negative errors as opposed to false positive errors.
https://openreview.net/pdf/2abac03a655a803c28aaeea7d0cefd63e537be44.pdf
[ { "confidence": 4, "rating": 8, "review_id": "dnLdZh1v8z", "review_text": "This paper studies the problem to recover an exact $k$ partition of a set with access to a same-cluster oracle that is allowed to lie $\\ell$ times. This papers gives an algorithm with optimal query complexity up to constants and a lower bound.\n\n1. The result of this paper is clean and complete. The algorithm's query complexity matches the lower bound up to constants.\n\n2. The main algorithm is concise, simple and elegant. The idea of the algorithm captures the problem well and has proved optimal guarantees.\n\n3. The lower bound is non-trivial and has some interesting ideas.\n\n4. This paper is very well-written. The notations and explanations are clear. I didn't even catch a single typo. Enough background and motivation is included in the paper. The paper is cohesive and organized, easy to follow. Math and algorithmic ideas are explained clearly.\n\n5. I think this paper should be a spotlight.\n\nI'm very satisfied with this paper, just two things I think it can improve on the writing.\n\n1. The algorithm is quite simple and intuitive. On the other hand, the lower bound is more complicated and more technical. I think it might be better to write less on the algorithm but explain more on the lower bound, especially how to construct a good responder's strategy.\n\n2. I think it's worth mentioning what's the optimal algorithm for the no-error oracle and compare your algorithm with theirs. \n\nAlso something I would not like to call it a weakness since I think it's beyond the scope of the paper:\n\n1. You justified the inconsistent error assumption (but I think the consistent error assumption can also be justified). But from the pure theoretical point of view, this assumption does make the algorithm design a lot easier and less interesting since the algorithm can make the same query many times. If the error model prevents such behavior of the algorithm, it is more interesting.\n\n1. Is it possible to get better query complexity bounds if the goal is to recover the partition approximately but not exactly?\n\n2. I mentioned consistent error above. I'm also thinking a more generalized adversarial error where it could be stochastic, for example, the expected number of error is $\\ell$. Does stochasticity makes things harder?" }, { "confidence": 4, "rating": 6, "review_id": "A2PHZGkkdZ", "review_text": "**[Setting]**:\nThis paper studies the problem of clustering n items into k clusters using an oracle that adversarially answers same-cluster queries for item pairs under the constraint that it makes at most $\\ell$ errors for a known constant $\\ell$. The goal is to exactly recover all clusters always (instead of just w.h.p.).\n\n**[Contributions]**:\n1. A lower bound on the number of queries when k is known/unknown. The authors formulate the problem in terms of Chip-Liar game to get this result.\n2. For known k, an algorithm that iteratively merges cluster using two heuristics:\n 1. If there is a (k+1)-clique of all -1s then the oracle has returned at least one false negative\n 2. More than $\\ell$ \"+1\" responses from oracle for a given pair guarantees that it is in the same cluster\n3. Sample complexity of the proposed algorithm that matches the lower bound.\n\nThe results extend to a more general problem where individual limits on false positive and false negative errors are known.\n\nThe paper also studies an algorithm for k-unknown case in the appendix. Sample complexity in this case is not optimal.\n\n1. The problem of guaranteed exact cluster recovery in the presence of noise is new. Having a hard limit on the number of errors made by the oracle makes this possible. Given concrete applications, this would be an interesting direction to explore.\n2. The sample complexity of the proposed algorithm matches the derived lower bound when k is known.\n3. The connection to Chip-Liar game for deriving the lower bound is interesting.\n\n1. The problem setting (oracle making at most $\\ell$ errors with $\\ell$ being a known constant) is not very practical in my opinion, which in-turn makes it hard to judge the significance of the results. Even for the examples given in the paper (L23-32), it is not clear why the oracle will make at most $\\ell$ errors (e.g., an experiment failing in bioinformatics) or why $\\ell$ will be known in advance. Do the authors have concrete applications in mind?\n2. Clarity-wise, while the details in the paper are mostly clear, it would be helpful to include more details from the appendix into the main paper. For example, the following can be included by making Section 3 more concise,\n 1. What does \"The position of a chip on the board will then be equal to the cost of the corresponding partition ..\" (L234-235) mean?\n 2. Some high-level details about the unknown-k algorithm.\n 3. Some intuition about why false-negative and false-positive error budgets inherently have a different contribution towards minimum sample complexity\n\nPlease respond to point 1 under weakness\n\n\n**Minor suggestions**:\n1. Typo in L100 - \"A many\" -> \"Many\" \n2. A more recent paper (Gupta et al. 2024) studies a more general setting than Chen et al. (2023), which is closest to your work.\n\n\nGupta et al. Clustering Items From Adaptively Collected Inconsistent Feedback - AISTATS, 2024" }, { "confidence": 4, "rating": 6, "review_id": "x1gS8JK6YP", "review_text": "The paper studies the problem of finding a hidden partition into $k$ clusters of a given universe.\nIn many applications an algorithm has only access to a same-cluster oracle. A query to this oracle reveals whether two elements belong to the same cluster or not. This problem has been previously studied and tight bounds on the query complexity, i.e., the minimal number of queries required to solve the problem, are known (Reyzin and Srivastava, and Liu and Mukherjee).\nIn this paper, the authors add the realistic assumption that the same-cluster oracle may not always reveal the correct answer. In their model, they (in advance) set a number \\ell which bounds the maximum number of wrong answers which the oracle is allowed to make. The goal of an algorithm is still to compute the hidden cluster with as few queries as possible. In particular, for the same tuple of elements the oracle may give different answers for different oracle calls, and the algorithm does not receive any information on whether the response of the oracle was correct or not. The authors present an algorithm and analyze its query complexity. This bound is generally larger than in the setting with a correct oracle, and depends on the parameter \\ell. If \\ell=0, the presented analysis recovers the results by Reyzin and Srivastava. Furthermore, they give a tight lower bound using an argumentation based on Renyi-Ulam games and correlation clustering.\nThey moreover study a slightly more general setting where the algorithm can set in advance more fine-grained bounds on how many false positive and false negative answers the oracle can give, and, for all problems they consider both, the setting where the number of hidden clusters k is known or not.\n\n- I think that the problem is important and appreciated by the ML-community, as clustering is a fundamental problem in machine learning. Moreover, the assumption that a same-cluster oracle may not always be correct seems quite reasonable and realistic. Thus, I think that this problem and the presented results could have many applications and an impact in certain areas.\n- The authors give a tight analysis of the considered algorithms.\n- Despite being tight up to constants, the main algorithm is well-presented, and easy to understand and implement.\n- Overall, I think that the paper is well-written and seems technically sound.\n\n- I think the main weakness of the model is that the upper bound \\ell on the number of faulty oracle responses must be set in advance and stays fixed. This could be a major drawback when applying this model and the algorithm in practice, because it seems not clear why a faulty oracle should be consistent with such a bound. \n- It seems that the main algorithms is quite similar to the algorithm without faulty oracle. I think it would be helpful for the reader to have a paragraph where the difference to this original algorithm is explained. \n\nFurther comments:\n- Line 236: missing 'and'\n\nIs there anything known for the setting where the number \\ell is unknown to the algorithm, and it only appears in the analysis? I.e. a strong lower bound or an obvious workaround? Such insights or discussions could make the main weakness less severe." }, { "confidence": 4, "rating": 6, "review_id": "Utjluw8gRf", "review_text": "This paper studies the query complexity of clustering with a faulty oracle. Given a set of $n$ points $V$, which is partitioned into $k$ hidden clusters, the learner wants to recover the hidden partition by querying whether two points are in the same clusters or not. There has been a line of work that studies the query complexity of the problem where the response of each query has iid error. This paper studies a different query model, where the learner is allowed to make repeat queries for the same pair of points but the responses could be adversarially flipped at most $\\ell$ times. This paper provides lower bounds for the query complexity of several variants of the problem and also designs efficient learning algorithms with a query complexity matching the lower bound.\n\n1. The paper establishes a novel relation between the clustering problem and the Rényi-Ulam liar games, which could potentially be useful for proving lower bounds for other learning problems.\n2. The algorithm designed in this paper involves non-trivial techniques and has a query complexity that matches the lower bound proved in the paper.\n\nMy main concern is about the significance of the learning model studied in the paper.\nFor graph clustering problems, the error is usually defined over the graph instead of over the queries, and sometimes repeated queries are not allowed. This is because sometimes by allowing the use of repeated queries, the learning problem could be easy to solve. For example, when iid noise is presented.\nIn this paper, the error is defined over an unbounded sequence of queries but only allows a constant number of mistakes to happen. In particular, knowing the number of mistakes seems to be very important to make the learning algorithm designed in this paper work. These two points seem to be too idealized to model problems that arise from real applications.\n\nMy questions are about the weakness pointed out above.\n1. Can you provide any real applications that motivated the study of such a learning model? (Only a constant number of mistakes are made over the queries and such a number is known)\n2. How would the learner know the error parameter $\\ell$ in advance and if we do not have the parameter $\\ell$ as input would it be possible to achieve exact recovery?\n3. If repeated queries are not allowed and the mistakes are placed by an adversary, would it still be possible to (almost) recover the underlying clusters?" } ]
yfQwyxiSJ7
Color-Oriented Redundancy Reduction in Dataset Distillation
Dataset Distillation (DD) is designed to generate condensed representations of extensive image datasets, enhancing training efficiency. Despite recent advances, there remains considerable potential for improvement, particularly in addressing the notable redundancy within the color space of distilled images. In this paper, we propose a two-fold optimization strategy to minimize color redundancy at the individual image and overall dataset levels, respectively. At the image level, we employ a palette network, a specialized neural network, to dynamically allocate colors from a reduced color space to each pixel. The palette network identifies essential areas in synthetic images for model training, and consequently assigns more unique colors to them. At the dataset level, we develop a color-guided initialization strategy to minimize redundancy among images. Representative images with the least replicated color patterns are selected based on the information gain. A comprehensive performance study involving various datasets and evaluation scenarios is conducted, demonstrating the superior performance of our proposed color-aware DD compared to existing DD methods.
https://openreview.net/pdf/20b534cf5fff43e4e9a8229eb66f4841e6dba9df.pdf
[ { "confidence": 3, "rating": 6, "review_id": "MNb021o0Pg", "review_text": "The authors propose AutoPalette, which reduces color redundancy in dataset distillation. They use a palette network and color-guided initialization to enhance training efficiency and performance by minimizing redundant color information in synthetic images and datasets.\n\nColor redundancy is a fundamental aspect of natural scene images but is often overlooked in large-scale image analysis. This study focuses on the missing part, and the proposed method is effective.\n\n- In the abstract, the authors summarize their framework as the one that minimizes color redundancy at the individual image and overall dataset levels. I think that’s a good summary. However, the description is not utilized when they introduce their framework in the main text. Although they describe it in the last section, it would be better to include the summary in the middle of the main, e.g., when introducing an overview or Figure 1.\n\n- I am confused a little about the definition of the color bit in this manuscript. The authors often describe the 8-bits for the original image (e.g., Figure 2). However, if the color bit is based on the number of color palettes, the original image should have 24 bits. \n\n- Typo: \"> can encoded in fewer bits” should be \"can be encoded”\n\n- While watching the condensed images in the Appendices, the CIFAR images are hard to perceptually recognize categories, but easy for Figures 7-9. I’m wondering why this perceptual difference emerges.\n\n- How did you decide the parameters, alpha, beta, and gamma in the experiments?" }, { "confidence": 5, "rating": 7, "review_id": "4deAD6tAl8", "review_text": "This paper introduces a straightforward yet effective dataset distillation method called AutoPalette. The method minimizes color redundancy at both the individual image level and the entire dataset level. At the image level, it trains the palette network by maximizing color loss and palette balance loss, thereby reducing color redundancy in images. At the dataset level, a color-guided initialization strategy is proposed to minimize color redundancy across the entire dataset. Extensive comparative and ablation experiments convincingly demonstrate the approach's effectiveness.\n\n- The proposed method outperforms other dataset distillation methods in most tasks, providing a new perspective on dataset distillation.\n- The experiments and ablation study seem well done. The paper's experiments are comprehensive, and the results of the ablation studies are convincing.\n\n- The paper could benefit from a more detailed explanation of the color loss and palette balance loss. It would be helpful to include an explanation of why the palette balance loss might achieve a more balanced color palette.\n- The paper does not seem to explain why the similarity between the last layer gradients is measured instead of directly measuring the feature level similarity in the Color Guided Initialization Module.\n\n- How does the efficiency of this method compare to other methods?\n- Why does directly optimizing the task loss lead to assigning pixels to a limited number of color buckets in lines 156-158?" }, { "confidence": 2, "rating": 5, "review_id": "oIQYxFgdks", "review_text": "The paper titled introduces AutoPalette, a novel framework for dataset distillation (DD) that focuses on minimizing color redundancy at both the individual image and overall dataset levels. Authors propose a palette network to dynamically allocate colors from a reduced color space to each pixel, ensuring essential features are preserved. Additionally, a color-guided initialization strategy is developed to minimize redundancy among images, selecting representative images based on information gain. Comprehensive experiments on various datasets demonstrate the superior performance of the proposed color-aware DD compared to existing methods.\n\n1. Color quantization is an interesting way for dataset distillation, the motivation of this paper is interesting.\n2. The methodology is well-defined, with clear explanations of the palette network and the color-guided initialization strategy.\n3. The framework is shown to be compatible with other DD methods, indicating its potential for broad application.\n\n1. The paper does not discuss the potential impact of the method on the performance of larger dataset beyond the CIFAR-10 and CIFAR-100. These 2 datasets are two small and could not show the effectiveness of the proposed method.\n\n2. There is limited exploration of how the method handles imbalanced datasets or classes with unique color distributions.\n\nSee weakness." }, { "confidence": 3, "rating": 5, "review_id": "q72mxAnJWf", "review_text": "This paper introduces ColorPalette, a framework that minimizes color redundancy at the individual image and overall dataset levels. At the image level, the palette networks generate condensed images in reduced color bit-width while at the dataset level, a color-guided initialization strategy is proposed. The experiments are done using various datasets and IPCs.\n\n1. A new direction for exploring DC is proposed. \n2. AutoPalette explores the possibility of performing DC in a reduced color space.\nThe paper is easy to understand.\n\n1. AutoPalette seems like it is built on top of [1] with DC loss.\n2. Lack of experiment on large-scale dataset ImageNet-1K.\n\n[1] Learning to Structure an Image with Few Colors, Yunzhong Hou et al.\n\n1. How is the performance of AutoPalette on ImageNet-1K?\n2. Since the method falls into the parameterization category, given an IPC storage size, how many samples does AutoPalette generate?\n3. In Table 1, why AutoPalette inferior to DATM on CIFAR-100 at 50 IPC?" } ]
yeFx5NQmr7
Learning 3D Garment Animation from Trajectories of A Piece of Cloth
Garment animation is ubiquitous in various applications, such as virtual reality, gaming, and film producing. Recently, learning-based approaches obtain compelling performance in animating diverse garments under versatile scenarios. Nevertheless, to mimic the deformations of the observed garments, data-driven methods require large scale of garment data, which are both resource-wise expensive and time-consuming. In addition, forcing models to match the dynamics of observed garment animation may hinder the potentials to generalize to unseen cases. In this paper, instead of using garment-wise supervised-learning we adopt a disentangled scheme to learn how to animate observed garments: 1). learning constitutive behaviors from the observed cloth; 2). dynamically animate various garments constrained by the learned constitutive laws. Specifically, we propose Energy Unit network (EUNet) to model the constitutive relations in the format of energy. Without the priors from analytical physics models and differentiable simulation engines, EUNet is able to directly capture the constitutive behaviors from the observed piece of cloth and uniformly describes the change of energy caused by deformations, such as stretching and bending. We further apply the pre-trained EUNet to animate various garments based on energy optimizations. The disentangled scheme alleviates the need of garment data and enables us to utilize the dynamics of a piece of cloth for animating garments. Experiments show that while EUNet effectively delivers the energy gradients due to the deformations, models constrained by EUNet achieve more stable and physically plausible performance comparing with those trained in garment-wise supervised manner.
https://openreview.net/pdf/d57b0731216ccd13a02117aa1f63730ec58dae56.pdf
[ { "confidence": 4, "rating": 4, "review_id": "x8iEhXVfVx", "review_text": "The authors propose a method to transfer the deformations of the observed garments to any other garment. Previous methods either rely on a large-scale dataset for training or analytical physics model with limited expressive ability. On the contrast, the proposed method first learns the constitutive relations from the observation by a neural network (EUNet), then use it as an energy prior to regularize the training of garment deformation model. This design addresses the limitation of previous works and show better results.\n\nThe strength of the paper is the proposed method does not need to collect huge amount of data with varied body pose and shape and garment types for training. Through theoretical analysis, they prove that they can learn a more physically accurate energy model to describe the deformation of garment. In this way, they do not need an explicit physical model, which tends to have limited expressive power. The derivation is theoretical sound.\n\nTo learn this energy model by EUNet, the authors rely on the synthetic data simulated with blender. A cloth with known geometry (vertices and faces) is assigned with a specific material type. However, this setting is too ideal. In real scenarios, we are more interested in transferring the material of a real cloth to another garment. But having the geometry of a real cloth usually is not feasible. Even though we can have the mesh of the cloth through some registration process, how to get the shape of mesh when it is hanging and dangling is still a problem. In this paper, I do not see the possibility of using the proposed method in real applications. This is the critical weakness.\n\n1. In Fig. 4, the results of MGN-S+EUNet on dress are not similar to the ground truth data. \n2. What is the unit of the errors in Table 1? The errors on the leather and denim seem too big compared with the others." }, { "confidence": 2, "rating": 6, "review_id": "eK0pObHgUC", "review_text": "This submission presents a method that could effectively learn the dynamic patterns of different garments from a single piece of cloth. The key insight is that the motion of different cloths is governed by both external forces and the constitutive relations rather than specific garment topologies. Thus, an Energy Unit Network (EUNet) is proposed to learn the topology independent constitutive laws. Then the animation is performed by energy optimizations. Experimental results shows improved results comparing to previous methods and baseline methods.\n\nThe paper is well written and easy to read.\n1)\tThe paper is well structured.\n2)\tMany terms are well defined and explained.\nThe proposed method is both novel and interesting.\n1)\tThe disentangled learning scheme of using a network to learn the constitutive law, which generalizes to different garment types, is physically intuitive and natural. More importantly, this design helps alleviate the needs for large amount of training data of various cloth shapes in dynamic for learning based animation. This disentanglement between topology and energy is achieved by using mesh edge as a unit instead of the whole cloth mesh.\n2)\tThe proposed disturbance training strategy helps stabilize the training and improves the generalization of EUNet. As a constraint, it accompanies the direct supervision on the energy form by taking into account the physical meaning of equilibrium state. This helps the network to learn a more reasonable manifold of the energy distribution.\nExperimental results:\n1)\tImproved results over previous methods are shown both qualitatively and quantitatively.\n2)\tAccording to the ablation study, the design of both the contrastive loss and dissipation unit loss are validated as effective.\n\nSome details regarding the design of the EUNet is missing. Although some descripsions on the EUNet design is provided at the experimental section, it is relatively hard for the reader to follow and develop a more coherent understanding of the presented work.\n\nSome limitations and open questions that the work might not cover:\nWhat about anisotropic materials? How to adapt the current model design to also fit to cloths where its material is anisotropic? How well does the method handles cloths with more complex topology that goes beyond a single layer of cloth? As also pointed by the authors, the method does not handle self-collision. It would be interesting to see how it can be adapted in that axis.\n\nSee weakness." }, { "confidence": 4, "rating": 5, "review_id": "GeqUseexGi", "review_text": "This work proposes a method to learn the constitutive model of cloth materials from observed cloth trajectory using a neural network. It adopts an MLP that operates on individual edges and predicts per-edge distortion based on the deviation of edge geometry from rest shape and trains the network using a combination of supervision on potential energy change with ground truth and optimality of incremental potential. The learned potential energy can be used as a constraint to train neural simulators for garment animation.\n\n- I appreciate the novelty in the idea of learning the constitutive model of cloth materials in a data-driven manner. Potentially this formulation could allow the neural networks to understand the intrinsic physical property instead of mimicking the behavior of specific examples, thus of scientific significance if implemented correctly.\n- The paper is well-written and mostly clear.\n\n- On the methodology side, the major question is probably the design of dissipative energy. On the one hand, why it is and is merely a function of \\(X^t - X^{t-1}\\) is questionable. In fact, whether it should be modeled as an absolute quantity is a question because the total amount of dissipative energy seems not that meaningful. The only observable quantity is the relative change of dissipative energy in a physical process.\n- On the other hand, with the presented framework, it is very hard to learn the major sources of energy dissipation: collision, and friction, since they are neither present in the training data, nor fully modeled (e.g. self-collision) in the formulation. While the dissipative energy is not the focus, the problem is that without correctly modeling dissipative forces, I doubt the possibility of learning an accurate elastic potential energy function.\n- On the evaluation side, the problem is that the method is only evaluated in a simplified setting, without comparing against methods or in settings that are practically useful (see below). In my opinion, there are two ways to demonstrate that the learned constitutive model is useful: either 1. demonstrate that it is more accurate than an analytical model on real data, or 2. show that it leads to more realistic animation than existing methods (including traditional numerical models).\n- The evaluation section only shows that the MGN trained with the learned constitutive model is better than those trained with ground truth garments or analytical constitutive model. On the one hand, it does not compare with other state-of-the-arts like HOOD, SNUG, and PBNS that are also formulated in a self-supervised manner. On the other hand, the claim that it is better than the analytical constitutive model is not convincing because the discrepancy may be caused by the limited accuracy in the neural simulator (or even by the mini-network mentioned in Sec 3.3). To truly demonstrate that it is better than an analytical one, it must be compared using a numerical integrator that is guaranteed to converge to the energy minimum.\n\nSee the weaknesses section." }, { "confidence": 4, "rating": 5, "review_id": "LWM8LI6ZjA", "review_text": "The paper proposes a novel method for animating garments by learning from a single piece of cloth. This approach circumvents the need for large-scale garment datasets, which are resource-intensive and time-consuming to create. The core idea is to use a disentangled scheme where constitutive behaviors are learned from observed cloth and then applied to animate various garments. The proposed Energy Unit Network (EUNet) captures constitutive relations in the form of energy, bypassing the need for traditional physics models.\n\nThe paper introduces a novel disentangled approach that separates the learning of constitutive behaviors from the animation process.\n\n The EUNet models constitutive behaviors using energy units, allowing for direct learning from observed cloth trajectories without traditional physics models.\n\nThe approach significantly reduces the data requirement, relying on a single piece of cloth for training, making it more practical and less resource-intensive.\n\nThe method produces animations that are both robust and generalizable, capable of handling various garment types and materials.\n\nThe energy optimization process, although effective, can be computationally intensive and may require fine-tuning to achieve optimal results.\n\nThe paper would benefit from more extensive experimental validation, including comparisons with a broader range of existing methods and more diverse garment types.\n\nHow does the performance of EUNet compare with traditional physics-based models in terms of computational efficiency and accuracy?\n\nWhat are the limitations of using a single piece of cloth for training, and how can these limitations be mitigated in future work?" }, { "confidence": 4, "rating": 4, "review_id": "Qu8ZgCzB3Q", "review_text": "This paper proposes to learn garment dynamics using a disentangled learning framework and the Energy Unit Network (EUNet). Instead of relying on extensive garment datasets, the approach learns constitutive behaviors from a single cloth piece and dynamically animates garments through energy optimization.\n\nThe writing is clear and technical details are described clearly. The visual aids and diagrams are well-integrated, enhancing understanding.\n\nMy main problem with the paper is that the problem of learning/recovering cloth dynamics from structured sample tests has been studied intensively for a long time, and the authors seem not to be aware of this whole field. This is a well-studied problem, and the authors do not position their method against significant prior works. Many existing works have also attempted learning from real-world fabric sample tests or indirect representations (video), which is a much harder problem. To mention a few:\n\n1. \"Predicting the Drape of Woven Cloth Using Interacting Particles\" Breen et al., 1994\n2. \"Estimating Cloth Simulation Parameters from Video\" Bhat et al., 2003\n3. \"Data-driven elastic models for cloth: Modeling and measurement\" Wang et al., 2011\n4. \"How Will It Drape Like? Capturing Fabric Mechanics from Depth Images\" Rodriguez-Pardo et al., 2023\n5. \"Estimating Cloth Simulation Parameters From Tag Information and Cusick Drape Test\" Ju et al., 2024\n\nThe authors should thoroughly review the literature and reposition their contribution and provide experimental comparisons against existing works. Additionally, the literature review section does not include important works from the physics simulation community. Including these references and discussing how the proposed method builds upon or differs from them would strengthen the paper significantly.\n\n1. Literature Positioning: Can you clarify your awareness and positioning of your method in relation to the existing body of work on learning/recovering cloth dynamics from structured sample tests? As discussed above, many significant studies in this area were not referenced.\n2. Physics Simulation/Graphics Community References: The literature review section did not include important works from the physics simulation and graphics community. How does your approach relate to or differ from the significant contributions in this field? Including a discussion of this could provide a better contextual grounding for your work.\n3. Experimental Validation: Can you provide more details on how your experiments validate the proposed method against these existing works? Specific comparisons and metrics would help clarify the effectiveness and novelty of your approach. Can you validate your approach against different datasets, including synthetic datasets generated from different simulation engines as well as real-world datasets used by current works?" } ]
ybiUVIxJth
Policy Aggregation
We consider the challenge of AI value alignment with multiple individuals that have different reward functions and optimal policies in an underlying Markov decision process. We formalize this problem as one of *policy aggregation*, where the goal is to identify a desirable collective policy. We argue that an approach informed by social choice theory is especially suitable. Our key insight is that social choice methods can be reinterpreted by identifying ordinal preferences with volumes of subsets of the *state-action occupancy polytope*. Building on this insight, we demonstrate that a variety of methods — including approval voting, Borda count, the proportional veto core, and quantile fairness — can be practically applied to policy aggregation.
https://openreview.net/pdf/fbe1222a75cee72847eef7783a09fb1b0b56c748.pdf
[ { "confidence": 3, "rating": 6, "review_id": "43feIc7uDw", "review_text": "This paper joins a long list of recent work that studies how to aggregate the preferences of several agents (e.g., humans) in a reinforcement learning framework inspired by social choice theory. The problem is modeled as a multi-objective MDP with $n$ different reward functions. The authors propose to use the state-action occupancy measure instead of each agent's most preferred policy or reward function directly. Popular concepts from social choice theory, such as the Borda count rule and approval voting, are then studied from this perspective.\n\n- The paper is well-written and easy to read. \n- It appears that considering the state-action occupancy measure has some advantages over working directly with each agent's optimal policy or reward function when attempting to introduce social choice rules, which---even though a standard approach in RL---is interesting.\n\n- In my opinion, the contributions of this work are limited. E.g., only the full information case is studied\n- The primary justification of this work (which is repeatedly mentioned in the paper) is that prior work on policy aggregation and fair RL is not invariant to affine transformations of the reward function. Essentially, agents can have differently scaled reward functions, which makes, e.g., maximizing for social welfare a bad objective. However, I don't understand why we cannot simply normalize the reward function of each agent, so that the reward functions are directly \"comparable\". I find the concern about affine transformations quite weak.\n\n- I'm a bit surprised about the title of the paper since you're not aggregating policies, but reward functions. In fact, the policies play a minor role in the paper, since you look at the preference relation induced by the reward function (which you then express in term of occupancy measures). Could you explain why what you're doing is policy aggregation and not just preference aggregation? \n\nTypo: \"Policy Aggergation\" in title of Section 5" }, { "confidence": 3, "rating": 6, "review_id": "5c8K7pIBFv", "review_text": "The paper solves the problem which arises in preference aggregation of individual policies to a collective policy – (1) summation based aggregation are sensitive to affine transformations and (2) voting rule based aggregation faces problem of policies being exponential in S. Towards solving this, the paper proposes voting over continuous space of alternatives (which eliminates affine sensitivity) and a volumetric definition of preference ordering. The paper next proposes efficient algorithms to (1) find approximate volumetric veto core and (2) approximate q-Quantile Fairness. They also show complexity of existing voting rules, notably plurality voting and borda count. They show that problem is computationally hard for plurality and open for broad count. I am inclined towards accepting the paper.\n\nThe paper solves a well-motivated problem of policy aggregation. Their proposal of achieving different notions of approximate fairness through efficient algorithms both novel and appears to be sound. Their theoretical analysis of the complexity of using plurality and borda count based voting is also significant and allows scope for future work in the direction. Their algorithms have been validated through experiments.\n\nIn the experimental section, using a common metric to quantify the “level of fairness” guaranteed by different algorithms would be beneficial for a more learned comparison. \n\nIn Def. 4 should the expression be vol(O’)/vol(O) >= 1 – veto(S) + epsilon instead of vol(O’) >= 1 – veto(S) + epsilon as currently stated?\n\nsee weakness.\n\nWhy can't yours be a special case of Noothigattu et al. [27]?" }, { "confidence": 3, "rating": 7, "review_id": "F81v7XEdNB", "review_text": "This paper studies aggregating multiple policies–which can be seen as a formalization of the task of aligning an AI system to the values of multiple individuals. When the number of states is small (such as, when multiple individuals have to select one out of a few candidates), this problem has been widely studied in voting and social choice theory, and there are many efficient aggregation rules (such as the Borda count). This paper, however, considers the other extreme: where the state-action space is huge, and it is not obvious how to design efficient methods to aggregate policies. \n\nThe main insight of this paper is that preferences over policies have a volumetric interpretation in the state-action policy space that, in some cases, leads to efficient aggregation algorithms. \n\nConcretely, the authors examine two types of aggregation rules: (1) two aggregation rules that are known to have desirable fairness properties (namely, proportional veto code and, the recently introduced, quantile fairness) and (2) voting or score-based rules such as Borda count and $\\alpha$-approval voting rule. \n\nBuilding on their insight the authors prove several results, including \n1. an algorithm which finds the policy wrt an $\\epsilon$-approximation of the proportional veto core using $O(log(1/\\epsilon))$ queries to a volume computation oracle, \n2. the existence of $q$-quantile fair policies for all $q\\geq 1/e$ (which is tight and stronger than the best possible bound in the discrete case),\n3. NP-hardness and inapproximability results for $\\alpha$-approval score.\n\nThe paper is well-written and easy to read. I believe that the problem proposed in the paper is well-motivated from alignin AI systems, and is of significant interest to research on voting rules and social choice theory. The theoretical results are solid and the proofs and/or approach are well outlined. Finally, I did not check the proofs in detail, but they appear sound. One caveat is that I am not familiar with the closely related prior work (e.g., [6]) and, so, cannot comment on the novelty of the proofs and results from prior work.\n\nI am not sure if the empirical results section is adding any value to this paper: it evaluates different aggregation rules, but I think this is not the focus of this work–I think the focus is to design efficient algorithms and/or prove existential results. If other reviews and the area chairs agree, my suggestion is to drop the empirical results section and use the additional space to add more exposition on the proofs. To be clear, this is no a significant concern for me.\n\nI do not have any specific questions for the authors." } ]
ybMrn4tdn0
Auditing Local Explanations is Hard
In sensitive contexts, providers of machine learning algorithms are increasingly required to give explanations for their algorithms' decisions. However, explanation receivers might not trust the provider, who potentially could output misleading or manipulated explanations. In this work, we investigate an auditing framework in which a third-party auditor or a collective of users attempts to sanity-check explanations: they can query model decisions and the corresponding local explanations, pool all the information received, and then check for basic consistency properties. We prove upper and lower bounds on the amount of queries that are needed for an auditor to succeed within this framework. Our results show that successful auditing requires a potentially exorbitant number of queries -- particularly in high dimensional cases. Our analysis also reveals that a key property is the ``locality'' of the provided explanations --- a quantity that so far has not been paid much attention to in the explainability literature. Looking forward, our results suggest that for complex high-dimensional settings, merely providing a pointwise prediction and explanation could be insufficient, as there is no way for the users to verify that the provided explanations are not completely made-up.
https://openreview.net/pdf/da6c6a3020c220d933287ab79ebff6b0838a4941.pdf
[ { "confidence": 4, "rating": 7, "review_id": "HSFh9MEtD2", "review_text": "The paper addresses the challenges in verifying the accuracy of local explanations for machine learning models, especially when the model is not fully known and access to it is limited. The primary focus is on minimizing the number of times the model and explainer are accessed during the auditing process. The contributions of the paper are as follows:\n\n**C1. Defining Auditing of Local Explanations:** The paper provides a formal definition of what it means to audit local explanations. It sets the groundwork for systematically evaluating the reliability of these explanations, which are crucial for understanding and trusting machine learning models.\n\n**C2. Importance of the Region’s size**: It highlights that the region to which the local explainer applies is a critical property. Understanding the scope and limits of where the explanation is valid is essential for accurate auditing. This insight helps in identifying when and where an explanation might fail to represent the model correctly.\n\n**C3. Bounds on Auditing Complexity**: The paper establishes both lower and upper bounds on the sample complexity required for auditing local explanations. These bounds are presented as functions of the newly identified property, which is the region’s size. This provides a theoretical framework for understanding the minimal and maximal data requirements for effective auditing.\n\n**S1. Framework for Auditing**: It proposes a theoretical framework for auditing local explanations, which is a significant step towards developing more rigorous and reliable methods for verifying the trustworthiness of explanations provided by machine learning models.\n\n**S2. Identification of Key Metrics**: The introduction of the explainability loss function, provides a quantifiable measure for evaluating local explanations for the original model, offering a systematic way to assess explanation quality.\n\n**S3. Highlighting the Importance of Locality**: The analysis which provide upper and lower bounds highlights the importance of the \"locality\" of explanations, bringing attention to a previously underexplored aspect in the explainability literature.\n\n**W1. Lack of Evaluation**: The paper does not include an evaluation on real-world datasets. Although the authors suggest that their results could have significant practical implications (“Our results might have far-reaching practical consequences”), an initial step should be to perform evaluations on actual data to validate their findings.\n\n**W2. Limited Discussion with Previous Research**: The paper could benefit from a more thorough discussion of how its findings relate to and build upon previous research in the field. Specifically, the authors mentioned that [Dasgupta 2022] is the most similar to their work, except it is limited to discrete explanations rather than continuous. However, it is not clear whether, in the discrete setting, the proposed work aligns with Dasgupta’s consistency metric, sufficiency metric, or if it does not coincide with any of these previous metrics.\nIt may also be worth discussing and comparing with the recent work of [Bassan 2023], which suggests a verification method for finding minimal sufficient explanations.\n\n- Dasgupta, S., Frost, N. and Moshkovitz, M., 2022. Framework for evaluating faithfulness of local explanations. In International Conference on Machine Learning\n- Bassan, S. and Katz, G., 2023. Towards formal XAI: formally approximate minimal explanations of neural networks. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems\n\n**W3. Focus on a Specific Type of Explanations**: In the paper, the presentation is a bit misleading because it suggests that explanations can be general. However, the focus is on one type of explanation – those that approximate the true model on a region of examples. This is not a general (local) explanation method, even though it includes quite a few types of explanations.\n\nSee Weaknesses." }, { "confidence": 4, "rating": 4, "review_id": "H0LYhm54h5", "review_text": "This work studies an auditing framework in the eXplainable Artificial Intelligence (XAI) area. Specifically, the authors consider the scenario where a group of third-party auditors or users attempt to perform a sanity check on the provided explanations. The framework allows the auditors to query the model prediction and local explanations. Based on the proposed framework, this paper presents a theoretical analysis of the sample complexity of auditing.\n\nThis paper targets a very important aspect of XAI studies. It considers the deployment phase where users do not trust the provided explanations. This is a usually overlooked perspective in the XAI community.\n\n1. This work focuses on local explanations defined in section 1.1 L48-49 and section 2.2 L155-162. These presumptions limit the scope of this paper to the surrogate model method (such as LIME, MUSE [1], etc.), where a glass-box explainer is used to approximate black box’ predictions in the neighborhood of input data. This greatly limits the impact of this work as such surrogate-model explanation methods only take a very small part of local explanation methods. Local explanations are not limited to surrogate model methods. It can refer to explanations regarding individual input samples instead of the entire data manifold or even regarding the model itself [2]. \n2. In the context of this paper, the authors claim that gradient-based explanations are surrogate model explanation methods (i.e. “local explanation method” under the definition of this paper) in L181-188. The authors define that $g_x(x) = (\\nabla_xf(x))^Tx$, which is the summation of input x gradient attributions. This corresponds to the prediction $f(x)$ only if $f$ satisfies homogeneity [3]. On the contrary, suppose $\\phi_f(x)\\in\\mathbb{R}^d$ is the explanation of SHAP, then $g_x(x):=(\\phi_f(x))^T\\mathbf{1} = f(x)$ can accurately reflects the prediction. Therefore, the definition of the explainers studied in this work is ambiguous and may require more rigorous considerations.\n\nIn summary of points 1 and 2, the formulation of the framework in this work is flawed. \n\n3. There is no empirical verification of the proposed theoretical results, which significantly undermines the contribution of the theoretical analysis. Note that a continuous function is always bounded on the closed neighborhood of x. Therefore, it is essential to empirically test whether the proposed bounds are tight. A theoretical demonstration is also appreciated.\n\n4. [minor] To stay consistent, “Lime” in L337 should be revised to “LIME”.\n\n5. While the motivation that users/auditors may not trust the explanation system and want to audit the model is an interesting and realistic setup, the proposed framework lacks practical contributions. Specifically, the formalism described in L64-66 and L236-242. can be difficult to satisfy.\n\n**Reference**\n\n[1] Lakkaraju, H., & Bastani, O. (2020, February). \" How do I fool you?\" Manipulating User Trust via Misleading Black Box Explanations. In *Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society* (pp. 79-85).\n\n[2] Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., & Kim, B. (2018). Sanity checks for saliency maps. *Advances in neural information processing systems*, *31*.\n\n[3] Hesse, R., Schaub-Meyer, S., & Roth, S. (2021). Fast axiomatic attribution for neural networks. *Advances in Neural Information Processing Systems*, *34*, 19513-19524.\n\n1. L51-52: Why do the authors claim LIME to be a gradient-based method?\n2. Definitions 2.1 and 2.2 are limited to black box $f:\\mathbb{R}^d\\rightarrow \\{\\pm 1\\}$. Can this constraint be relaxed to more general settings? For example, is it limited to a binary decision boundary?\n3. The main theoretical results of Theorem 4.1 have many presumptions that are not justified. For example, why is the “user-specified” local error threshold assumed to be 1/3?" }, { "confidence": 4, "rating": 6, "review_id": "RMFTNfIAag", "review_text": "The paper proposes an auditing framework to verify the truthfulness of explanations by a third-party in scenarios where there is no trust. Bounds on sample complexity are provided that depend on the locality (minimum local mass) of the explanation. Further, the authors discuss that for gradient-based explanations in higher dimensions, locality tends to be small to achieve a reasonable explanation loss. Smaller locality increases the provided bounds on the amount of data required for the audit.\n\n1. The topic of the paper is important for policymakers and the XAI research community in general, as it suggests that in natural scenarios where there is no trust, it is difficult to verify whether the local explanation is truthful to the model without actually knowing the model. \n2. The paper provides upper and lower bounds on sample complexity for auditing local explanations. \n3. The analysis includes gradient-based explanations and discusses how to generalize to other methods, including LIME and Anchors.\n\n1. Regarding the soundness of the auditing framework, could you please comment on the motivation for the company to provide all of the required data (especially local regions) to the auditor? When the requested dataset is sufficiently large, the third-party company could potentially recover the model along with all the classifier outputs, local regions, and other information.\n2. Figure 1 is hard to fully understand without reading the paper. It’s not intuitive which data points are explained and why, in panel (b), there is sufficient data for the audit. Could you please provide more explanation in the figure caption or simplify the figure?\n3. Section 5 and Theorem 5.1 present an existence proof. However, the example considered in Figure 2 (a) is very specific. Can you elaborate on how often you expect this data distribution to occur in real-world datasets or discuss the value/range of locality for other likely data distributions?\n\nMinor:\n1. $E(f, x_i)$ is used in Section 1 but defined in Section 2.\n2. Please specify $\\epsilon_1$, $\\epsilon_2$, $\\delta$, and $\\gamma$ in Theorem 4.2 or mention that you are operating under the same conditions as in Theorem 4.1, if applicable. Also, Algorithm 1 is referenced before it is defined.\n\n1. Do you have any guidance or suggestions on how users should choose the local error threshold, $\\gamma$? \n2. How useful are the bounds in Section 4 for reasonable values of locality and low explanation loss, such as when the decision boundary is nearly locally linear? In this case, can you provide any empirical estimates on the lower and upper bounds in Theorems 4.1 and 4.2?" }, { "confidence": 3, "rating": 7, "review_id": "6PwQBAaZ0x", "review_text": "This paper provides theoretical results on how many queries are required for an auditing framework for local explanations of machine learning algorithms (e.g., neural networks).\n\nThe paper is well motivated with a widely interesting and relevant topic. The approach is theoretical, and rigor is provided through proofs in appendices. The authors connect their work to popular algorithms: Gradient based approaches, Lime, and Anchors.\n\nThe authors acknowledge that the local loss estimation is, in their argument, only a necessary but not sufficient condition for trust. They do not establish or reference any evidence that manipulation would result in the local loss as a good indicator of untrustworthiness. As a result, the analysis serves more as a potential validation scheme for the limited types of algorithms that meet their linearity requirement (e.g. Anchors or LIME). This drastically narrows the scope and implications of their analysis. Unless this can be firmly established the title, abstract, and conclusions of the paper should be amended to reflect the correct scope of its claims\nFurthermore, there are plenty of reasons (and examples) where interpretation methods are demonstrated to be frail to the *input* (e.g. Ghorbani). This would likely not pass the audit but would not be evidence that the $E$ has been altered. I think this speaks to some confusion in the setup of the paper as to what “trust-worthiness” is. The authors present it as trust between the user and provider rather than trust in the robustness of the explainability metric which is, I argue, closer to what their results seem to reflect.\nAdditionally, it is not clear to me that this is the only way to test explainability metrics with limited access (e.g. data points, classifier outputs, and local explanations). For example, these metrics are popular because they match so well to human “expert” knowledge. You show someone the pcitre of the shark from Smilkov et al and they agree that they see “shark-like” shapes in the SmoothGrad output. Consequently, one could imagine the case where sampling in a *non-local* fashion would trace whether the same features matching human “experts” appear. \nUltimately the proposed methodology seems entirely impractical (which is sort of the point).\n\n-\tCan you establish or provide a reference hat show how the local loss / explainability loss is a good indicator for manipulation from an adversarial attack (or disingenuity design)? This is critical for the scope and implications of your analysis.\n-\tPlease comment on whether the proposed auditing scheme is indeed the only way to establish a local explainer (fitting your constraints) taking into account the suggestions above.\n-\tCan you comment on how the results might \n-\tL 117 please change reference to Shap to the acronym SHAP\n-\tL 337 please change reference to the Lime algorithm to its acronym: LIME\n-\tL 440 where does the 2592 come from in the denominator for the bound on $n$?\n-\tHow well would the framework work if you chose some K examples to audit around instead of drawing sample i.i.d.?" } ]
ybLXvqJyQA
Predicting Ground State Properties: Constant Sample Complexity and Deep Learning Algorithms
A fundamental problem in quantum many-body physics is that of finding ground states of local Hamiltonians. A number of recent works gave provably efficient machine learning (ML) algorithms for learning ground states. Specifically, [Huang et al. Science 2022], introduced an approach for learning properties of the ground state of an $n$-qubit gapped local Hamiltonian $H$ from only $n^{\mathcal{O}(1)}$ data points sampled from Hamiltonians in the same phase of matter. This was subsequently improved by [Lewis et al. Nature Communications 2024], to $\mathcal{O}(\log 𝑛)$ samples when the geometry of the $n$-qubit system is known. In this work, we introduce two approaches that achieve a constant sample complexity, independent of system size $n$, for learning ground state properties. Our first algorithm consists of a simple modification of the ML model used by Lewis et al. and applies to a property of interest known beforehand. Our second algorithm, which applies even if a description of the property is not known, is a deep neural network model. While empirical results showing the performance of neural networks have been demonstrated, to our knowledge, this is the first rigorous sample complexity bound on a neural network model for predicting ground state properties. We also perform numerical experiments that confirm the improved scaling of our approach compared to earlier results.
https://openreview.net/pdf/83a1900fe49a28fcac07aec59f2f276432aaaff2.pdf
[ { "confidence": 2, "rating": 5, "review_id": "nKBG9d3cwm", "review_text": "The paper presents two novel machine learning algorithms for predicting ground state properties of quantum systems with constant sample complexity, independent of system size. The first algorithm modifies an existing ML model, while the second introduces a deep neural network model, both showing improved scaling in numerical experiments.\n\n1. The introduction of a deep learning model with rigorous sample complexity bounds is a significant contribution to the field. The constant sample complexity, regardless of system size, is particularly noteworthy and addresses a critical challenge in quantum many-body physics.\n\n2. The authors provide numerical experiments that validate the theoretical claims. The experiments demonstrate the practical effectiveness of the proposed algorithms, especially the deep learning model, which outperforms previous methods.\n\n1. The training objective for the neural network is non-convex, which poses challenges in finding a global optimum efficiently. The paper does not address how to overcome this issue or guarantee convergence to optimal weights.\n\n2. While the paper claims improved computational complexity, the actual implementation details and computational resources required for the deep learning model are not thoroughly discussed.\n\n1. The reviewer would appreciate if the authors could elaborate on how the performance of the deep learning model generalizes to Hamiltonians that extend beyond the specific cases examined in the numerical experiments.\n\n2. Can the authors provide more insights into the practical implementation of their algorithms, particularly regarding the initialization and regularization procedures used during training? This will be helpful for readers reproduce the results of the paper." }, { "confidence": 3, "rating": 7, "review_id": "g4cN13iX2J", "review_text": "In this paper, the authors focused on utilizing deep learning methods to predict the ground states. They made an important assumption that brings theoretical improvement to achieve constant sample complexity in the training data. They also made two main alternations to the learning model compared to previous literature, including incorporating Pauli coefficients in feature mapping and utilizing kernel ridge instead of Lasso. Numerical results for up to 45 qubit systems are provided, supporting the theoretical findings.\n\n1. High-quality paper with rigorous theoretical findings and comprehensive numerical results.\n2. Improved the sampling overhead to constant complexity, independent of the system size.\n3. Explored new possibilities for predicting the ground state properties of quantum many-body systems using neural network models.\n\nThe main concern is that the improvement of this paper against precious works is limited. The main theoretical finding is based on an additional assumption that we know the property we'd like to predict in advance. The proposed learning method has only two minor alternations. These issues prevent me from giving a higher evaluation score, but they do not overshadow the fact that this article is of high quality.\n\nNo questions" }, { "confidence": 4, "rating": 3, "review_id": "5w5Z4s4zfD", "review_text": "This paper studies the sample-efficient learnability of properties of grounds states of local Hamiltonians. Ground states of local Hamiltonians are hard to compute, even for quantum computers and to circumvent this hardness, several recent works proposed learning the trace inner product of local observables with the ground state given labeled training data. This setting is exactly PAC learning, i.e. given labeled data from a worst-case distribution, the goal is to get low prediction error wrt the same distribution on future samples. The best sample complexity for this problem is known to be log(n) 2^{polylog(1/eps)}, shown by Lewis et al. \n\nThe main questions addressed in this work are (1) whether the sample complexity can be improved to be independent of the system size and (2) whether there are rigorous guarantees for learning properties of the ground state via neural network based algorithms.\n\nThe paper provides several technical results on the representation and learnability of ground-state properties. The improved sample complexity follows from tweaking the algorithm is [2] and making additional assumptions about the training distribution. The neural net sample complexity result proceeds in two steps. First the authors prove an approximation-theoretic result for functions that look like ground state properties and show they can be well-approximated by neural networks. They then obtain a generalization bound using fairly sophisticated technical machinery.\n\nI think the paper does not resolve the questions it claims to resolve and does so in a slightly camouflaged way. \n\n1. Question 1 in the paper asks whether you can get sample complexity that is independent of system size for learning properties of ground states, aka the PAC learning setting for ground states of local Hamiltonians. The answer obtained is yes, under two crucial caveats, the observable is known in advance and the distribution over the training data is not worst-case. This diverges significantly from the PAC learning model. The same critique holds for Question 2. Further, reference [1] does not state this as an open problem. \n\n2. The assumptions on the training distribution are not stated upfront and do not appear to be mild. The assumptions include the distribution g to be strictly non-zero on [-1,1], and zero outside; g being continuously differentiable; and component-wise independent. Are there any natural distributions that satisfy all these properties simultaneously? \n\n3. There is no discussion of why each of these properties is needed, which ones are crucial to the argument and which ones are for technical convenience. Having 4 non-trivial technical assumptions of the pdf is a major weakness, especially since it makes the result incomparable to prior work [1,2], where the setting is truly PAC learning. \n\n4. In the numerical experiments, I see no discussion of what distribution was used to generate the training data, and how many of the technical conditions this distribution satisfies. It remains unclear to me what the experimental section is trying to convey, since it does not complement the main theorems. \n\n5. When is it reasonable to expect labeled data for the observable / property you want to learn? What is a real-world scenario where one would expect to obtain such labeled data? \n\n6. Is there a way to get some non-trivial sample complexity (not necessarily system independent) bound for PAC learning via neural networks, without the extra assumptions on the training distribution? \n\n7. There are completely unjustified claims such as the neural network achieving low loss and finding a bounded magnitude solution after constant many steps and O(n) time. It is not clear why this should ever be the case.\n\nI included the questions with the weaknesses." }, { "confidence": 1, "rating": 5, "review_id": "vgJg4NLzGm", "review_text": "This work builds upon the work of Huang et al. and Lewis et al. by introducing two new approaches to get constant sample complexity for predicting properties of a ground state of a many-body local Hamiltonian. The two new approaches are a modified ML model that requires knowledge of the property of interest and a deep neural network model that does not need prior knowledge of the property. In this paper, the authors provide both proves and small experimental evaluations to show that both approaches achieve constant sample complexity, independent of system size.\n\nThe paper is well-organized and clearly written. The paper includes rigorous theoretical guarantees and small numerical experiments to confirm the efficacy of the proposed methods compared to the existing algorithm.\n\nEven though it is a strong paper, the issue addressed here is a specific case that builds upon two other papers. Additionally, the related published works are mostly, if not all, published in physics journals. I do not see why the results shared in this paper are valuable to share with the broader NeurIPS community, especially since the mathematical proofs are very rigorous, I would expect that it is not accessible to the broader audience. Some assumptions and conditions required for the theoretical guarantees may also limit the applicability of the results.\n\nHow do the parameters or phases of the random Heisenberg model affect the training performance?" }, { "confidence": 2, "rating": 5, "review_id": "cXmmEzhITD", "review_text": "In this work, the authors give two algorithms that predict (geometrically local) properties of ground states of gapped geometrically local Hamiltonians. This problem has been introduced by Huang et al. [HKT+22], and the previous best known algorithm is given by Lewis et al. [LHT+24], which uses $\\log(n)$ samples, where $n$ is the number of qubits in the Hamiltonian. This paper further improves on the $log(n)$ sample complexity, and gives two algorithms that only use a constant number of samples. The first algorithm is modified from the algorithm of [LHT+24], changing the regression part of the algorithm from LASSO to kernel ridge regression. The second algorithm uses deep neural network, having the advantage of not needing to know the observables in advance, but requires more restriction on the distribution of the Hamiltonian parameters. The authors complement their theoretical results with numerical simulations.\n\n[HKT+22] Huang, Hsin-Yuan, Richard Kueng, Giacomo Torlai, Victor V. Albert, and John Preskill. \"Provably efficient machine learning for quantum many-body problems.\" Science 377, no. 6613 (2022): eabk3333\n[LHT+24] Lewis, Laura, Hsin-Yuan Huang, Viet T. Tran, Sebastian Lehner, Richard Kueng, and John Preskill. \"Improved machine learning algorithm for predicting ground state properties.\" nature communications 15, no. 1 (2024): 895.\n\nThe work achieves the optimal sample complexity of the problem and is written in good English.\n\nThe part of preliminaries that are restating definition and result of [LHT24+], is not well written, and I believe has led to a critical bug to the first algorithm. In particular, Theorem 8 claims that for every $O\\sum_{P} \\alpha_P P$ that can be written as sum of geometrically observables, $\\sum_{P} |\\alpha| =O(1)$. However, the counterpart in [LHT24+] has extra restrictions: $||O||_{infty}=1$ and $O$ need to be inside a radius of $R=O(1)$. Therefore, where the authors uses Theorem 8 in equation (B.28) to bound the kernel, the result is incorrect since they do not have $R=O(1)$.\n\nOther minor inconsistencies includes: \n\nline 642: $S^{geo}$ not defined\n\nline 660: $h_{c(j)}$ not defined\n\nSome typos:\n\nline 121: geometrically [local] observable\n\nline 148: || \\omega || -> || w ||" }, { "confidence": 3, "rating": 7, "review_id": "17zvwTlzEn", "review_text": "The authors propose an ML based method to predict properties of ground states of quantum systems which comes with provable guarantees. Improving on recent work by Huang et al and Lewis et al, they give sample complexity bounds which are independent of the number of qubits. This approach is applicable when the observable one is trying to predict is predetermined. The authors also suggest a deep learning based approach for the case where the observable is not known in advance. They support their theoretical wok with numerical experiments.\n\nThe paper adresses an important probelm, and is well written and argued. The authors clearly explain the previous state of the art in ML based prediction of ground state properties, as well as their own contribution. Their proposed modification to the procedure suggested by Lewis et al., which results in Theorem 1 of the paper, seems interesting and worthwhile. Likewise the guarantees obtained for the training of a custom Neural Network architecture are intriguing from a learning theoretic perspective.\n\nIt is unclear to me how the Neural Network generalization result compares to known results in the literature- the setting which the authors study is quite specific and thus it is not easy to relate the result they obtained to those in the theoretical deep learning literature.\n\nHow restrivtive is the assumption about the dependence of each local term on a fixed number of parameters? It would be instructive to give physically relevant examples where this does and does not hold.\nPresumably the results would not apply to standard Neural Network architectures- what specifically would break in the analysis? \nHow does the sample complexity depend on the constant the authors assume the network weights are bounded by?" } ]
ybHPzL7eYT
Large Spatial Model: End-to-end Unposed Images to Semantic 3D
Reconstructing and understanding 3D structures from a limited number of images is a classical problem in computer vision. Traditional approaches typically decompose this task into multiple subtasks, involving several stages of complex mappings between different data representations. For example, dense reconstruction using Structure-from-Motion (SfM) requires transforming images into key points, optimizing camera parameters, and estimating structures. Following this, accurate sparse reconstructions are necessary for further dense modeling, which is then input into task-specific neural networks. This multi-stage paradigm leads to significant processing times and engineering complexity. In this work, we introduce the Large Spatial Model (LSM), which directly processes unposed RGB images into semantic radiance fields. LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward pass and can synthesize versatile label maps by interacting through language at novel views. Built on a general Transformer-based framework, LSM predicts global geometry via pixel-aligned point maps. To improve spatial attribute regression, we adopt local context aggregation with multi-scale fusion, enhancing the accuracy of fine local details. To address the scarcity of labeled 3D semantic data and enable natural language-driven scene manipulation, we incorporate a pre-trained 2D language-based segmentation model into a 3D-consistent semantic feature field. An efficient decoder parameterizes a set of semantic anisotropic Gaussians, allowing supervised end-to-end learning. Comprehensive experiments on various tasks demonstrate that LSM unifies multiple 3D vision tasks directly from unposed images, achieving real-time semantic 3D reconstruction for the first time.
https://openreview.net/pdf/ee1ddc8c08fa974be5694c1cffee72f12da261ad.pdf
[ { "confidence": 4, "rating": 7, "review_id": "uJLsnYID5d", "review_text": "The authors proposed the Large Scene Model (LSM), a novel 3D scene understanding framework that unifies multiple vision tasks within a single model. LSM represents a scene using pixel-aligned point maps, integrating geometric, appearance, and semantic information into a unified representation. By leveraging a Transformer architecture with cross-view and cross-modal attention, the model effectively incorporates multi-view cues and semantic knowledge from a 2D vision model.\n\nLSM's design enables efficient scene-level 3D semantic reconstruction and rendering in real time on a single GPU. The model's integration of a 2D semantic model allows for open-vocabulary understanding, extending its applicability to diverse real-world scenarios. Furthermore, by consolidating multiple tasks within a single model, LSM minimizes error propagation, leading to more robust and accurate results compared to state-of-the-art baselines.\n\n* The described technical approach in this work is sound and clearly presented. The contributions from the various proposed modules are well ablated and investigated in the experiments (Table 4).\n* The model demonstrates high inference efficiency compared to other approaches. with reconstruction time of 0.1s and rendering at 270 due to the underlying 3DGS representation that is being generated.\n* I like that the model reconstructs the underlying 3D representation in a single feedforward pass, as compared to multiview + test time optimization for fusion approaches. This improves the speed and efficiency for inference. It is good to see compelling quality based on the novel view synthesis.\n\n* I think the main contribution of this paper is the unification of the various scene modeling tasks into the same model, including geometry, color and semantics. The authors further claimed in the abstract and introduction that multitask training end-to-end allows LSM to outperform state-of-the-art baselines. However the paper did not ablate the multi-task learning design choice. For instance, what if some of the tasks are removed (e.g., semantic feature prediction). How does that affect the performance of the other tasks?\n* A suggestion is that for Figure 5, it is unclear how much pose divergence there is between the input source view and the synthesized novel view. It would be helpful to also show the source view supplied as input to the model.\n* The paper is named Large Scene Model, which seems to suggest something to do with model parameter scaling, hence large. However the paper does seem to do much scaling on model size. So perhaps a more accurate terminology would be Multitask or Unified Scene Model? \n\nNits.\n* Line 153: Typo: to serve as input?\n* In Tables 1-4, I suggest highlighting the best (and possibly second-best result) for easier comparison of the various experiments.\n* In Table 4, why is + Multi-scale Fusion indented?\n\n- For the quantitative comparisons given in Figure 3, are they predicted from the input view, or are they for a novel view?" }, { "confidence": 4, "rating": 5, "review_id": "236S6RR30f", "review_text": "This paper presents the Large Scene Model (LSM), which generates semantic radiance fields from uncalibrated RGB images using a unified Transformer-based framework. LSM can infer geometry, appearance, and semantics simultaneously and synthesize label maps in real-time. The model integrates multi-scale fusion and features from 2D models to enhance accuracy and efficiency.\n\n1. Unified Framework: LSM combines multiple 3D vision tasks into a single framework, streamlining the process and reducing complexity.\n\n2. Real-time Performance: The model achieves real-time 3D reconstruction and rendering, suitable for applications needing fast processing.\n\n3. Enhanced Feature Fusion: By incorporating 2D model features, LSM improves the quality of feature lifting and semantic understanding, enhancing overall performance.\n\n1. Dataset: I recommend the authors organize the training and testing phases in alignment with previous methods (NeRF-DFF and Feature-3DGS) and provide results on the Replica Dataset. The authors have not sufficiently justified deviating from the baseline evaluation split. Furthermore, an explanation is needed for the significant performance discrepancy of the baselines between the Replica Dataset and the authors' setup. Additional training details may also be necessary.\n\n2. Writing: The paper's abstract, introduction, and methods sections require improvement. Specifically, the methods section should introduce each module and their interconnections from a high-level perspective rather than presenting them as isolated components.\n\n3. Method Details: Do the authors use camera parameters? If so, why are camera parameters mentioned in line 117? If camera parameters are used, the model cannot be described as \"unposed.\"\n\n4. Visualization: In Figure 4, there are category colors that are not listed in the legend. Additionally, a more diverse set of results should be displayed, as the current experimental set predominantly features sofas.\n\n1. Module Timing: I am curious about how the authors manage to use eight cross-attention modules and still achieve reconstruction in 0.1 seconds. Please provide the time consumption for each step.\n\n2. Image Resolution: What is the resolution of the images? More details regarding the inference process should be provided, especially concerning the time comparison." }, { "confidence": 4, "rating": 6, "review_id": "RUyjosT8U6", "review_text": "The paper aims to train a network that takes in a set of unposed images and directly produces a semantic radiance field.\n\nThe method utilizes a single Transformer-based model that learns the attributes\nof a 3D scene represented by a point-based radiance field. A decoder produced 3D Gausians that can be splatted to make novel images, depth estimates, and semantic segmentations.\n\nThe paper provides a transformer architecture for producing 3D gaussians with rich features from unposed images, which seem very valuable. The design choices in the proposed system are well-chosen from methods available at this time, leading to a system that has a good combination of little-compute and competitive-accuracy on three different tasks (nvs, depth, semantics).\n\nThe paper shares goals and ideas with \"Scene Representation Transformers\" (Sajjadi et al., CVPR 2022) and its follow up work Object Scene Representation Transformer (NeurIPS 2022) and RUST: Really Unposed SRT (CVPR 2023). This paper is different, because it ultimate produces a set of gaussians rather than a LLFF or NeRF volume, and it distills features from 2D foundation models. However, it is similar in that a transform encoder and decoder produces a scene representation directly from a set of images, which is then used for novel view synthesis, depth estimation, and semantic segmentation. In any case, those paper seem fairly similar in concept and so I think they should be discussed in the related work, and possibly approach sections.\n\nThe ablation study in table 4 suggests that the key methods in the paper have little impact on the results of NVS.\n\nIt is interesting that the 3D methods, which have access to multiple views of the same scene do not perform as well as LSeg in Table 1. This is counter-intuitive. Can you please explain why?\n\nThe results on multiview depth accuracy are kinda amazing. Why is the proposed method better than ones that take the camera parameters? Is it due aligning the scene scale (do all the methods get the same method for scale alignment?\n\nThe novel view synthesis images look very good. Can you please provide some info about how close the novel cameras are to the reference ones provided at inference time? Is there a way for you to quantify and compare to PixelSplat the NVS results as the novel cameras deviate further and further from the reference ones?" }, { "confidence": 3, "rating": 5, "review_id": "8FKKJkujx3", "review_text": "This paper solves the sparse-view scene reconstruction problem by Large Scene Model, a unified scene reconstruction model via unposed RGB images. The model utilizes a ViT backbone for extracting the feature and uses cross-view attention to align the multi-pose feature for consistent features. The 3D scene is further rendered from the 3D semantic field derived by the multi-view features. The unified model is capable of multiple 3D-based tasks including novel view synthesis and 3D language-based segmentation. Experiments showed that the work achieves better results with limited performance sacrifices in the NVS task and higher performance in the multi-view language-based segmentation task.\n\n1. The model is general and multi-purpose in sparse-view scene reconstruction.\n2. The model can achieve better results while still obtaining lighting-fast rendering speed and can be applied to real-time reconstruction.\n\n1. The technical contribution is limited. The model is generally designed via multi-purpose modules glued attention and Transformers, which is a straightforward and widely applied idea. There is no significant new problem has arisen and novel solutions proposed.\n2. The performance comparison with NVS-related works is limited. Firstly, the authors train and run comparison experiments on the same dataset, which can be biased. Secondly, several popular scene datasets incorporated in similar works (such as RealEstate10k) are not utilized in this work. Thirdly, methods similar to pixelSplat such as Splatter Image[1] are not included in comparison.\n3. The presentation can still be improved. Firstly, the authors titled their work “Large Scene Model”, while the design is more similar to the idea of pixel-based Gaussian splatting (such as pixelSplat and GaussianImage). Secondly, each module's input and output data type cannot be directly recognized from the pipeline graph. \n4. The bibliography of this paper lacks some related works, such as Splatter Image[1], which is also an image-space Gaussian splatting method. \n\nReference: \n[1] Szymanowicz, Stanislaw, Chrisitian Rupprecht, and Andrea Vedaldi. \"Splatter image: Ultra-fast single-view 3d reconstruction.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10208-10217. 2024.\n\n1. Did the authors try replacing the designed module with large-scale pretrained models, such as using pretrained monocular depth estimation model?\n2. The design philosophy is similar to multi-view image generation works. Can this model output high-quality and consistent multi-view images, as the fashion of Free3D? \n3. The term of “language-driven segmentation” is not quite clear to me. Does it mean semantic segmentation?" } ]
yaYJlpidX1
Metalearning to Continually Learn In Context
General-purpose learning systems should improve themselves in open-ended fashion in ever-changing environments. Conventional learning algorithms for neural networks, however, suffer from catastrophic forgetting (CF)---previously acquired skills are forgotten when a new task is learned. Instead of hand-crafting new algorithms for avoiding CF, we propose Automated Continual Learning (ACL) to train self-referential neural networks to meta-learn their own in-context continual (meta-)learning algorithms. ACL encodes continual learning desiderata---good performance on both old and new tasks---into its meta-learning objectives. Our experiments demonstrate that, in general, in-context learning algorithms also suffer from CF but ACL effectively solves such "in-context catastrophic forgetting". Our ACL-learned algorithms outperform hand-crafted ones and existing meta-continual learning methods on the Split-MNIST benchmark in the replay-free setting, and enables continual learning of diverse tasks consisting of multiple few-shot and standard image classification datasets. Going beyond, we also highlight the limitations of in-context continual learning, by investigating the possibilities to extend ACL to the realm of state-of-the-art CL methods which leverage pre-trained models.
https://openreview.net/pdf/815ad575cd94846104fec394cf865896fb55599c.pdf
[ { "confidence": 4, "rating": 5, "review_id": "TP1Wzlfae3", "review_text": "The paper focuses on Automated Continual Learning which is different than handcrafted continual learning. It uses self referential neural networks to meta learn their own in-context continual learning algorithm. First, the paper shows the emergence of in-context catastrophic forgetting. Second, the paper analyze the performance of proposed method (ACL) and finally the paper discuss the limitation of the proposed method.\n\n- The paper is clearly written and easy to follow\n- The paper introduces original idea of Automated Continual Learning\n- The paper identifies \"in-context\" catastrophic forgetting\n\n- The paper claims to do in-context continual learning but the concept of in-context learning is not clearly explained.\n- The paper mainly focus on two task and five task settings but it would be more helpful to see the more different settings such as three task or four task\n- How is the size of SRWM affects the maximum sequence length that can be train?\n\n- Why only consider the two-task setting?\n- Why ACL was not compared with replay buffer based methods?\n- What is the architecture of SRWM?" }, { "confidence": 4, "rating": 5, "review_id": "dkAa52QMaY", "review_text": "The paper describes a method for in-context continual learning (CL) by using a type of meta-learning neural architecture based on ‘self-referential weight matrices’ (SRWM). Proposed in prior work, these models learn to modify weight matrices iteratively as they process more and more inputs. In this work, they are given few-shot examples from different tasks and iteratively update the weight matrices as the examples are processed. This update process is referred to as “in-context” learning in this work. The key innovation is to define the loss function of SRWM training to optimise for both forward (improving performance of subsequent CL tasks) and backward (improving performance of previous CL tasks) transfer while achieving good performance on the current task. Experiments are conducted on commonimage classification meta-learning benchmarks such as Split-MNIST and Mini-ImageNet. Results show the proposed method prevents catastrophic forgetting (without using replay), outperforming existing meta-learning baselines on the evaluated benchmarks.\n\nStudies the problem of in-context catastrophic forgetting via a two-task toy setting and reveals the issue when training with no backward transfer loss term. This is shown to be mitigated by including the backward transfer loss term.\n\nProposes an in-context CL method using models based on SRWM and a novel loss to mitigate catastrophic forgetting as more tasks are learned. The method does not use a replay buffer.\n\nStudies and covers standard image classification meta-learning tasks such as Split-MNIST, FashionMNIST, and CIFAR-10. On Split-MNIST, shows improvements over existing CL and meta-baselines in both domain and class incremental evaluation settings. The improvements, when additional 5-task fine-tuning is used, is significantly above baselines. \n\nThe paper is clearly written, with thorough literature review.\n\nOne weakness of the proposed method is that the number of loss function terms increases with the number of CL tasks, as pointed out by the authors in Appendix A.5. This prevents this method from being scaled to more practically relevant settings where a large number (much more than 2 or 3 that this paper has mostly focused the experiments on) of tasks are considered in a CL setting. Method of reducing the loss terms would strengthen the paper.\n\nAnother weakness, which is also noted by the authors in Table 4 and Section 4.3, is that the performance of the proposed model and method is poor compared with those based on pre-trained transformer models, even on an easier evaluation task. The authors in Section 5 also discuss a potential connection between LLM transformer training as an implicit version of the proposed model and method. Given these existing strong and more widely adopted methods, it is unclear how much value the proposed method adds. SRWMs are not widely used and LLMs training can scale to a massive number of tasks with a single loss [1] (albeit not CL). A more detailed explanation of the application of the findings of this paper beyond those interested in SRWMs would be helpful.\n\nAnother weakness of this paper is its focus on image classification meta-learning tasks only. It is helpful to know the generality of this method, for example on language modelling tasks or multimodal tasks. An experiment demonstrating the method in CL language tasks would be helpful.\n\n[1] Finetuned language models are zero-shot learners. Wei et al. ICLR 2022.\n\nNone" }, { "confidence": 3, "rating": 5, "review_id": "UuL3QQ5U7Z", "review_text": "The paper studies the problem of catastrophic forgetting (CF) by formulating continual learning (CL) as learning from a sequence of demonstrations of tasks. The paper proposes a meta-learning objective function that includes backward transfer terms. These terms compute the error of the predictor on previous tasks after receiving demonstrations of the current task.\n\n- The approach of formulating (continual learning) CL as learning from a sequence of demonstrations of tasks is interesting.\n- The experiment shows positive results when compared to non-meta-learning approaches\n\n- The paper is difficult to follow. Many definitions and the algorithm are not very well explained.\n - The motivation of formulating (continual learning) CL as meta-learning is not well presented.\n - Some details of the architecture are mentioned in the background section only (e.g. replacing self-attention with SRWN and the multi-head version.)\n - The details of the training and inference process are not well presented.\n- The training process can be very costly and poorly scaled with the number of tasks and the number of examples per task. In each step over a sequence of demonstrations, the method needs to compute and store a new weight matrix in order to perform back-propagation. It might require more memory during training and at inference.\n- Even being a meta-learning approach, the model still needs fine-tuning when given a new task to adapt to a new number of tasks.\n\n- Can the authors explain more on the following claim “The stability-plasticity dilemma are automatically discovered and handled by the gradient-based program search process.“ (line 52)?\n- What are the advantages of this method compared to previous approaches?\n- How do the number of examples per task and order of tasks during the training affect the performance at inference time?\n- How does the method scale with the number of tasks in terms of performance and computation?\n- It’s unclear how to calculate the loss function in a batch fashion since each training point requires a different sequence of inputs (depending on the position of the task in the sequence) and loss components." }, { "confidence": 4, "rating": 5, "review_id": "tM1T7H8k7y", "review_text": "The paper proposes a novel technique to automatically discover in-context continual learning dynamics for image classification task sequences through meta-learning. In order to achieve this purpose, the approach relies on 2 main novelties: \n* Using self referential weight matrices on top of an image encoder - SRWM, as self-modifying that adapts itself to the stream of inputs, is an natural model for continual learning. \n* Encoding continual learning desiderata in the meta-objective, i.e. backward and forward transfer. \n\nThe authors first apply the approach in a classic two-task setting (Split-MNIST) that allows them to showcase and analyse the emergence of in-context catastrophic forgetting phenomena, and to show that using their ACL loss can help reduce it. They further evaluate their method and compare them to replay-free baselines from the CL and meta-CL literature, showing an advantage of their approach in scenarios with up to 3 tasks. \n\nThe authors further test the limits of their approach by comparing it to more recent learning to prompt techniques for continual learning, leveraging the power of pretrained large models. This scenario show a limitation of the technique in more complex scenarios with more tasks, more diverse and complex data.\n\n* The paper takes an interesting perspective on continual learning, leveraging the interesting properties of SRWM and the capability of meta-learning to encode the desired behavior in the meta-learning objective. The combination of these two contributions is novel to the best of my knowledge, and lead to interesting insights. \n\n* The approach leads to interesting performance in relatively simple scenarios, outperforming some of the existing continual learning techniques. \n\n* I also particularly appreciated the authors discussion of the method limitations. Both the experiments with learning to prompts and the discussion provide very valuable insights that can help building on the work in the future.\n\n* In my opinion, the main limitation of the approach is its practicality. From the experiments reported in Table 4, it seems that the approach requires to met-train on a sequence of similar length and/or complexity to provide its potential. This is not possible to know in advance in practice. Moreover, one limitation that the authors have not mentioned is that the meta-objective seems to require keeping in memory a number of copies of the model that is equal to the number of tasks. This can quickly become cumbersome for real applications that can require more complex models and very long sequences of tasks. \n\n* While the authors focus on classic benchmarks for continual and meta-learning, these benchmarks are artificial, relatively simple and lack of diversity. Different works highlight the limits of these benchmarks, I invite the authors to look at \"Meta-Album: Multi-domain Meta-Dataset for Few-Shot Image Classification\" Ullah et al. 2023, and \"NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research\" Bornschein et al. 2023 for examples of more realistic benchmarks. \n\n* It would be interesting to add a discussion of the cost of the approach (computation, memory, ...). Even is it gives a substantial boost in many cases, it would be interesting for practitioners to compare what they gain to what they pay.\n\n* The approach is focused on the task aware scenario, and rooted in a notions of tasks. In many practical scenarios, the distribution shift occurs in a softer way, with no clear notion of task boundaries. Can the authors comment on the possibility of extend their approach to the task-agnostic scenario?" } ]
yXpfrLMIr2
Binarized Diffusion Model for Image Super-Resolution
Advanced diffusion models (DMs) perform impressively in image super-resolution (SR), but the high memory and computational costs hinder their deployment. Binarization, an ultra-compression algorithm, offers the potential for effectively accelerating DMs. Nonetheless, due to the model structure and the multi-step iterative attribute of DMs, existing binarization methods result in significant performance degradation. In this paper, we introduce a novel binarized diffusion model, BI-DiffSR, for image SR. First, for the model structure, we design a UNet architecture optimized for binarization. We propose the consistent-pixel-downsample (CP-Down) and consistent-pixel-upsample (CP-Up) to maintain dimension consistent and facilitate the full-precision information transfer. Meanwhile, we design the channel-shuffle-fusion (CS-Fusion) to enhance feature fusion in skip connection. Second, for the activation difference across timestep, we design the timestep-aware redistribution (TaR) and activation function (TaA). The TaR and TaA dynamically adjust the distribution of activations based on different timesteps, improving the flexibility and representation alability of the binarized module. Comprehensive experiments demonstrate that our BI-DiffSR outperforms existing binarization methods. Code is released at: https://github.com/zhengchen1999/BI-DiffSR.
https://openreview.net/pdf/bdcbf03ed2b041a6f0730d46d010316bdcad8da7.pdf
[ { "confidence": 4, "rating": 6, "review_id": "WbAyBpXqW8", "review_text": "The paper introduces BI-DiffSR, a novel binarized diffusion model for image super-resolution, designed to accelerate the inference speed and reduce computational costs of diffusion models while maintaining high performance. It proposes a UNet architecture optimized for binarization, featuring consistent-pixel downsampling/upsampling and channel-shuffle fusion to address dimension mismatch and fusion difficulty, alongside a timestep-aware redistribution and activation function to adapt to varying activation distributions across different timesteps. The model demonstrates superior results over existing binarization methods, approaching the perceptual quality of full-precision models with significantly reduced memory and computational requirements.\n\n- The paper is well-written and easy to understand.\n\n- This paper designs a novelty 1-bit UNet for accurate binarized diffusion model, including:\n - New downsample module and upsample module for Dimension Consistency.\n - Channel shuffle module to balance the activation value ranges of two input features.\n - The timestep-aware redistribution (TaR) and timestep-aware activation function (TaA)\n\n- Experiments achieve the state-of-the-art in super resolution with diffusion.\n\n- The basic BI-Conv block lacks novelty, which is as the same as the binarized module in ReActNet that contains RSign and RPReLU.\n\n- TaR uses different parameters for different time steps, but in the mean while, the normal time embedding is projected into the resblock, it is also a time-aware on feature maps, what is the differences or why TaR works?\n\n- SR3 is not a new diffusion baseline for super resolution, ResShift[1], SinSR[2] should be better, and the metrics of PSNR, SSIM, LPIPS is much old, the CLIPIQA, MUSIQ, MANIQA should be better for evaluating the performance of generative super resolution.\n\n- Self-attention and MLP are common modules in diffusion, such as LDM[3] and ResShift[1], which require a lot of computation. How can the method in this paper be extended to self-attention and MLP?\n\n\n[1] Yue, Zongsheng, Jianyi Wang, and Chen Change Loy. \"Resshift: Efficient diffusion model for image super-resolution by residual shifting.\" Advances in Neural Information Processing Systems 36 (2024).\n\n[2] Wang, Yufei, et al. \"SinSR: diffusion-based image super-resolution in a single step.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[3] Rombach, Robin, et al. \"High-resolution image synthesis with latent diffusion models.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\n\nPlease refer to the weaknesses above." }, { "confidence": 4, "rating": 8, "review_id": "Q4uZ6xam7v", "review_text": "This work present a novel binarized diffusion model for improving the efficiency of super resolution tasks. Compared with the existing works, this work first pointed out the specific challenges of binarized DMs for SR, including the dimension mismatch and fusion difficulty of representations. Then this work present several techniques: consistent-pixel down/upsampleing, channel-shuffle fusion, and Time-step-aware redistribution function for the aforementioned challenges. Comprehensive results show that the provided binarized DMs for SR not only significantly outperform the binarized models with existing SOTA binarization methods, but also achieve floating-point level performance. And for the efficiency, the statistics of params and flops show the advantage of proposed method, and the paper also present the real inference time on edge, which seems important and encouraged in the binarization community.\n\n1. As far as I know, this is the first work to present the specific binarization method for diffusion model of SR. Since the good performance has been achieved by DMs in various SR tasks, it’s important to present novel insight to compress these models, especially considering the severe drop still exists after binarizing by existing SOTA methods.\n2. The motivation is intuitive and techniques are novelty, especially considering the features of DMs. The proposed CP-Up/down and channel shuffle are highly specified to the architecture of the diffusion models, which is novel and cannot be achieved by previous methods, including binarization function and binarized structures. And the computation is also small, allowing minor burden with significant performance improvement. And the proposed activation function also focus on the high dynamic of activation range during time-step, which is one of the most critical problem for the quantization of DMs. \n3. The proposed method achieve SOTA results in accuracy. Comprehensive comparison has been included in this paper, including SOTA binarization methods and various evaluation datasets. The results show that the proposed outperforms than previous binarized DMs for SR with significant improvements.\n4. In this paper, diverse analysis, including quantitative, statistical, and visual results are presented in detail. More important, the paper shows the efficiency evaluation based on real inference libraries and edge hardware, which is of great significance for practical application.\n\nThough it’s a good paper, some issues should be addressed.\n1. The writing and presentation of the paper should be improved, including but not limited to the grammar and description. For example, some basic knowledge about quantization, SR, and DMs seems to be summarized as a preliminaries section; and let the proposed techniques be highlighted in Figure 2.\n2. As for the efficiency, I suggest the authors present the computation more detailed, such as present the computation of each part in the whole network before and after the binarization. This will show the efficiency advantage of the proposed method much clearer.\n3. The proposed challenge I and II are insightful, but more further discussion (such as visual, quantitative, or theoretical analysis) are presented after proposing. I suggest authors do more discussion about that.\n4. Some recent binarization methods for SR [1] are suggested to be compared and some quantized DMs [2] are suggested to be discussed the differences to make the comparison more comprehensive.\n[1] Flexible Residual Binarization for Image Super-Resolution. ICML 2024\n[2] EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models. ICLR 2024\n\n1. Compared with the quantization of DMs for SR, can authors provide discussion about the advantage and motivation of binarization?\n2. What is the type of ARM hardware for the evaluation of inference?\n3. If the proposed method have potential generalized to more generative tasks and architectures?" }, { "confidence": 4, "rating": 7, "review_id": "vX22UUCWT5", "review_text": "The authors propose BI-DiffSR to binarize diffusion based image super-resolution (SR) model. They design a UNet architecture for the whole binarized model structure. To maintain dimension consistency, they propose two modules, CP-Down and CP-Up, which can further help transfer full-precision information. To enhance feature fusion, they propose the channel-shuffle-fusion (CS-Fusion). They also propose TaR and TaA to dynamically adjust activation distribution cross different timesteps. The authors provide extensice experiments to demonstrate the effectiveness of their proposed method.\n\nThe topic is very important and practical. Diffusion models have shown excellent performance for image super-resolution (SR). It is very practical to quantize the models before deploying them into devices. Binarization is an extreme tool to compress the SR model. Few works have been proposed to investigate such an important problem in image SR. \n\nThe authors give several insights for the specific topic. Namely, there are some key aspects in diffusion based image SR binarization, like dimension mismatch, fusion difficulty, and activation distraction. Those problems hinder the performance of binarized image SR diffusion models. The observation and analyses given in the introduction section are insightful and motivate readers well.\n\nTo alliveate the problems in binarized diffuision based SR models, the authors propose consistent-pixel-downsample (CP-Down) and consistent-pixel-upsample (CP-Up) to ensure dimensional consistency. They propose the channel-shuffle-fusion (CS-Fusion) to facilitate the fusion of different features within skip connections and suit binarized modules. They propose the timestep-aware redistribution (TaR) and timestep-aware activation function (TaA) to adjust the binarized module input and output arross different timesteps.\n\nThey provide extensive ablation study experiments (including quantitative results in Table 1 and visualization analyses in Figures 6 and 7.) to show the effects of each proposed components. Those experiments are convincing.\n\nThe authors provide comparions with SOTA methods. According to the main quantitive and visual comparisons, they show that their proposed BI-DiffSR achieves superior performance over others.\n\nThe overall writing and organization are pretty good. I think the work is well-prepared. The supplementary file further provides more details. The paper is easy to follow and they promise to release the code, which makes this work more convincing.\n\nWhen binarizing full-precision model from 32-bit to 1-bit, ideally we can reduce the parameters by 32 times. But, as shown in Table 2, the authors reduce parameters from 55.41M to 4.58M (for scale 2). There is a gap between ideal case and practical one. Please give some analyese about the reasons for this gap. Also, are there any idea to further narrow the gap?\n\nThe parameters and Ops are reduced obviously from full-precision to binary one. But the authors did not give results about inference time on real devices or give some analyses. I am curious how fast the binarized model will be.\n\nThe writing can further refine in some cases. For example, in the abstract part (Line 9-10), “… to maintain dimension consistent” should be changed to “… to maintain dimension consistency”.\n\nCan this method be applied to other diffusion models, like stable diffusion? If so, can the authors give some suggestions to binarize stable diffusion?\n\nCan we apply this binarization method to other related image restoration tasks? Like image denoising, deblurring?\n\nHow long did the authors to train the models?" }, { "confidence": 5, "rating": 5, "review_id": "sWXfiOGDuH", "review_text": "This paper introduce a novel binarized diffusion model, BI-DiffSR, for image SR. A UNet architecture optimized for binarization, channel shuffle fusion, and time-step-aware redistribution and activation functions are designed. The experimental results proved the effectiveness of the method.\n\n1. This paper is well written, nicely presented, and well organized.\n\n2. Binarized diffusion networks are promising.\n\n3. The performance improvement over other binary SR networks is significant.\n\n1. Lack of discussion with some related works[1, 2, 3, 4], in particular [1] which is also for binary SR networks. Please analyze and discuss the differences with [1,2].\n\n2. Ablation experiments are not convincing enough. Comparisons with some other activation function or fusion methods [1, 2, 3, 4] should be included.\n\n3. It is always known that diffusion models are slow. Although binarization will speed up the operation, can it achieve a better trade-off in performance and efficiency than a real-valued efficient SR network. It is suggested to compare with some efficient SR networks [5, 6, 7] in terms of Params, FLOPs, inference time and performance.\n\n> 1. Flexible Residual Binarization for Image Super-Resolution. ICML24.\n\n> 2. Q-DM: An Efficient Low-bit Quantized Diffusion Model. NIPS23. \n\n> 3. Binarized Low-light Raw Video Enhancement. ICCV23.\n\n> 4. Binarized Spectral Compressive Imaging. NIPS23.\n\n> 5. Efficient long-range attention network for image super-resolution. ECCV22.\n\n> 6. DLGSANet: lightweight dynamic local and global self-attention networks for image super-resolution. ICCV23.\n\n> 7. Feature modulation transformer: Cross-refinement of global representation via high-frequency prior for image super-resolution. ICCV23.\n\nSee Weaknesses." } ]
yXW2dCTQdi
Controlled maximal variability along with reliable performance in recurrent neural networks
Natural behaviors, even stereotyped ones, exhibit variability. Despite its role in exploring and learning, the function and neural basis of this variability is still not well understood. Given the coupling between neural activity and behavior, we ask what type of neural variability does not compromise behavioral performance. While previous studies typically curtail variability to allow for high task performance in neural networks, our approach takes the reversed perspective. We investigate how to generate maximal neural variability while at the same time having high network performance. To do so, we extend to neural activity the maximum occupancy principle (MOP) developed for behavior, and refer to this new neural principle as NeuroMOP. NeuroMOP posits that the goal of the nervous system is to maximize future action-state entropy, a reward-free, intrinsic motivation that entails creating all possible activity patterns while avoiding terminal or dangerous ones. We show that this goal can be achieved through a neural network controller that injects currents (actions) into a recurrent neural network of fixed random weights to maximize future cumulative action-state entropy. High activity variability can be induced while adhering to an energy constraint or while avoiding terminal states defined by specific neurons' activities, also in a context-dependent manner. The network solves these tasks by flexibly switching between stochastic and deterministic modes as needed and projecting noise onto a null space. Based on future maximum entropy production, NeuroMOP contributes to a novel theory of neural variability that reconciles stochastic and deterministic behaviors within a single framework.
https://openreview.net/pdf/bd4ce492257f560caead52dd6d5cf618cf2665fb.pdf
[ { "confidence": 3, "rating": 7, "review_id": "o7gkPGcWzE", "review_text": "The authors propose a principle for selecting actions to drive recurrent neural network activities which aims at maximizing the variability of the neural activity while avoiding unwanted states. They define unwanted states as states where no action is possible, and use a reinforcement learning framework to select the input and analyze the coupling between action choice and network dynamics. They apply their networks and input selection to a few tasks where maximizing the entropy within some boundaries is defined as success, and show that the network performs those tasks.\n\nThe framework is, to the best of my knowledge, original. It is also sound in its analysis, and well explained.\n\nI have two concerns, the first one is (for lack of a better word) teleological, the second more practical.\nMy first concern is that the authors use the word performance often and refer to their networks as solving the task. But from what I can read, the \"task\" of filling as much of the space (MOP) as possible was never given to the agent/input controller. Thus, if the task of the network is to maximize its entropy while remaining on a bounded region, and the R network is not taught to maximize entropy, we can hardly argue that the R network failed. I get that the point that MOP can be useful for some problems focusing on exploration, but it is a bit odd to talk about solving tasks and performances when such performance was not told to the agent, but rather an agent was build for that task (or for another one in the case of R). Please correct me if I misunderstood.\nMy second concern is that an agent with a binary value function which chooses random actions (value 1) except if those have a punishment (value 0). Such value might be easy to learn (it only requires learning a boundary, which is smooth in the problems presented), and if the state space of the actions is large, random choices could be very close to MOP (because a random action sequence is likely to go through many states). While this is not necessarily true, it is worth checking, as it would make the whole MOP less impactful.\n\nTechnical remarks:\n- The addition of stochasticity to the R network is a bit tricky, because the MOP agent never had the problem that it might \"accidentally\" jump into the terminal state if it did not want to. Thus, for \"survival\" it might be better to simply stay in some very small region that is as far as possible from the terminal states, because there we find the smallest chance of accidentally going to the wrong region. A better option would be to give R a random action and then ask if it wants to take it. But I suspect that this would give a high variability.\n\nMinor issues about literature and references\n- When the authors mention that usually recurrent neural networks tend to have neurons with saturation, it seems like an unfair comparison. A network trained with a specific task do not have an incentive to maximize the number of states, but this could be added to the loss function (if there is one explicitly) or simply enforced by some intrinsic plasticity rule, as some works in reservoir computing have done (both in Echo State Networks for machine learning and biological models such as SORN). Also, it might simply be the case that for a given task it is better to be close to the saturation.\n- In the discussion the authors rightfully mention the variability found in songbirds. But they do not note that it agrees with the works that they mention in the introduction (ref 29-35) which use variability only during learning, and that they use as a current limitation motivating their work. It would seem natural from a neuroscience point of view that variability is suppressed during courting by some executive brain area, rather than assuming that the area that generates the song naturally knows when to change behaviors. \n- The argument that terminal states are those where the agent has no available actions is a bit limiting. Any task that corresponds to reaching a goal (for example a location) leads by definition to a reduced action space, at least in practice (the agent would not leave that position). A note about this would be beneficial.\n\nFinally, the authors mention that MOP might take deterministic actions to increase entropy later. This would be, in my opinion, an important contribution, but I haven't seen this being done explicitly in the current paper. Why not try to have two circles connected by a very thin line, to showcase this?\n\nThere is a edge case that bothers me. If we consider the perfect value function, all action paths within the boundary have a limited entropy, but an action path that leads to leave the boundary has technically infinite entropy (it is never taken, thus its probability would be zero, hence infinite entropy). If this is correct, how does the neural network for the values avoid this problem?" }, { "confidence": 3, "rating": 5, "review_id": "NyCR67SbkT", "review_text": "In natural behaviors, there’s usually variability despite being able to perform tasks with high performance. This paper aims to understand whether it’s possible for neural networks to have high variability while maintaining high task performance and being able to switch to deterministic behavior modes when needed. \nThis paper uses an RNN with fixed weights to model the state of the environment and how it’s affected by the action of the agent. The agent is modeled by a controller which aims at optimizing the occupancy of the action-state space. This is achieved by having a reward that increases when the agent selects an action that is less likely with the current policy. The optimal value function, and in turn the policy, is approximated by using a single hidden layer feedforward NN.\n\nThe paper presents a reward function that maximizes the action path entropy, and provides interesting examples of how this network performs in three different example tasks. The writing is clear and easy to follow.\n\n1.\tThe tasks are mostly limited to setting several terminating states, so that the MOP network learns to avoid the terminating states while maximizing the entropy. I wonder how general this type of tasks is, or how more interesting RL tasks may or may not be formulated this way. \n2.\tThe motivation of this paper is unclear to me. The authors aim to show that NNs can have high variability with good performance, in order to match natural behaviors, and to propose possible mechanisms for neural variability. However, there is no comparison with experiments to show how well the behavior of the NN matches natural behaviors, or to show how the proposed reward may be superior. The authors also do not explain the generation of neural variability, but instead directly enforce variability in the MOP network.\n\n1.\tHow does the structure (the number of hidden layer neurons, depth) of the NN that serves as value function approximator, affect the results?\n2.\tIn Fig 3(c), why is the action entropy low in the lower right and upper left corners but not the other two corners? one would naively expect them to be symmetric." }, { "confidence": 4, "rating": 6, "review_id": "b8TwYpSVEY", "review_text": "This paper applies the maximum occupancy principle (MOP) -- previously introduced as a normative theory of behavioural variability -- to recurrent neural networks, thereby proposing MOP as a normative theory of neural variability. The MOP postulates that an agent seeks to maximize future occupancy of its state-action space. A key insight of the previously published MOP paper (ref 42) was that an MOP-following agent naturally learns “good” behaviour by actively avoiding terminal states (e.g. death) as those imply very reduced future state occupancy. This effect is demonstrated again here by showing that MOP-following RNNs can be made to avoid specific activity patterns, but remain maximally variable otherwise. In terms of methods, the main challenge here was to approximate the MOP value function for nonlinear RNNs -- the sole determinant of the optimal policy. The authors do so by training a NN value function approximator on a regression objective derived from the self-consistency (Bellman) equation that the value function must satisfy. The framework is then applied to a few toy setups, including a context-dependent drawing task. Some technical limitations are discussed.\n\nThe idea of applying MOP to RNNs is potentially interesting, as it provides a new normative theory of neural variability that will be interesting to confront to neural data -- this paper provides some technical foundations for investigating this hypothesis further. The paper is technically well executed.\n\nThere is some, but not an awful lot of, added value relative to ref 42. The MOP is a creative new framework, but the idea that biological agents learn what _not_ to do _instead of_ learning what to do seems a hard sell. The drawing tasks of Figs 3+4 seem carefully designed to demonstrate that MOP-following networks can achieve some “positive” functionality by exclusion, but I have a hard time imagining how the framework would scale to even simple control tasks like swinging a pendulum up; many such tasks are defined by what the agent must do, and many suboptimal states are actually not at all absorbing / terminal. I think the paper probably ought to discuss these limitations in more depth.\n\nThere is at least one other normative theory of neural variability that ought to be mentioned in the intro more explicitly: sampling-based probabilistic inference, where variability represents uncertainty. (Ref 26 is cited for \"nonlinear network interactions leading to variable activity patterns\" but has nothing to do with networks. Echeveste et al 2020 by the same group might be more appropriate in this context.)\n\nIn summary, I think this idea is potentially interesting -- I view it as a putatively useful theoretical framework for studying how brains learn from bad outcomes (which engage a very different system from the brain's dopaminergic reward system). However, the paper as it currently stands is rather incremental and perhaps not of broad appeal to the NeurIPS community; I would strongly encourage the authors to explore the ramifications of the neural-MOP framework for neuroscience, articulating predictions for neural variability in specific setups where neural data is available for confrontation.\n\nMinor typos I picked up:\n- sentence on l.54 (“This theory frames...”) is grammatically weird.\n- l.128: inevitably constraint → constrain\n- l.135: of parameters → with parameters\n- l.178: a terminal states → state" }, { "confidence": 4, "rating": 7, "review_id": "NOimxQDLCK", "review_text": "This paper proposes a mechanism to induce variability in \"reservoir\" recurrent neural networks without impinging upon task performance, by maximising the cumulative entropy of future states and actions/behaviors. These actions are provided by a controller network to the reservoir as input currents. The authors demonstrate through experiments that the induced variability does not come at the cost of adhering to constraints on energy or specific neuronal activities, or task performance (there is no explicit reward in these tasks for the proposed framework, and so this is measured by the survival time, i.e., timesteps until a terminal state is reached or a constraint is violated). Comparisons with networks without the input current modulation or those with explicit rewards show that the former are unable to properly satisfy task constraints, while the latter find overly conservative or \"risk-averse\" solutions that suppress variability. The demonstrations also show that the proposed framework leads to networks switching between more deterministic modes of computation (lower action entropy) near terminal states/constraints and more stochastic (higher action entropy) ones otherwise. Overall, this paper provides a novel perspective on how neural variability could be maximized while still allowing for accurate performance and avoiding terminal states, especially when an explicit reward function is unavailable or undesirable.\n\n1. The paper is well-motivated, clear, and provides a unique perspective on how there could be controlled variability in a system without negatively affecting task performance. The figures and overall presentation are good and clearly demonstrate the validity of the central claims.\n2. The experimental results include specific controls for the proposed mechanism – the authors show results for networks without any input current modulation and also networks with explicit constraints imposed by a reward function.\n3. In the appendix, the authors show that their central claims are largely valid even when there are other sources of variability such as intrinsic noise in the networks.\n4. While the tasks are simplistic, they are well-designed and quite interpretable, allowing the claims to be validated easily through the visualizations.\n\n1. This is a limitation acknowledged by the authors, but it seems like the computational complexity for the framework is quite high, so it is difficult to evaluate how this would scale to more complex tasks or multi-task settings.\n2. To my knowledge, and perhaps I have missed this, but the authors have not provided clear connections to the biological inspirations for the proposed mechanism, such as perhaps neuromodulatory mechanisms, or comments on its biological realism. Could the authors elaborate on this and are there any testable predictions for this model of neural variability?\n3. While the simplicity of the tasks is an asset, it would also be important to see what happens when there is a greater diversity among tasks in a multi-task setting (see Yang et al. [1]), and when performance is not only linked to constraint satisfaction or \"survival\". For example, would this framework impede performance when one of the tasks explicitly requires less variability as is perhaps the case in a memory task (see Yang et al. [1] again for examples)?\n4. This is a minor point, and a suggestion rather than a weakness, but there are some interesting recent works that could be mentioned to strengthen the background:\n 1. Takasu & Aoyagi [2] discussed an input current modulation mechanism and how it affects the Lyapunov exponents of reservoir networks' dynamics – specifically, suppressing chaos and ensuring networks are at the edge of chaos to enable effective information processing (and is thus related to the variability in these networks; also related to [59] from the paper). It would be interesting to briefly compare/contrast the proposed mechanism/goals to that proposed in [2].\n 2. In lines 33-37, the authors discuss works where internal synaptic noise is proposed as a mechanism for neural variability, and mention that some works use this assumption to \"describe variability during spontaneous activity–in the absence of sensory stimuli\". Works such as Asabuki & Fukai [3] and Krishna et al. [4], where such a mechanism is assumed and used to describe properties of spontaneous activity, could be discussed (in addition to [18, 20] from the paper) to provide a better idea of the implications of such mechanisms.\n\n**References:**\n1. Yang et al. “Task representations in neural networks trained to perform many cognitive tasks.” Nature neuroscience vol. 22,2 (2019): 297-306.\n2. Takasu & Aoyagi. “Suppression of chaos in a partially driven recurrent neural network.” Phys. Rev. Research 6, 013172 (2024).\n3. Asabuki & Fukai. “Learning rules for cortical-like spontaneous replay of an internal model.” bioRxiv (2023): 2023-02.\n4. Krishna et al. “Sufficient conditions for offline reactivation in recurrent neural networks.” The Twelfth International Conference on Learning Representations (2024).\n\nSee the Weaknesses section." } ]
yWq89o19wf
User-Creator Feature Polarization in Recommender Systems with Dual Influence
Recommender systems serve the dual purpose of presenting relevant content to users and helping content creators reach their target audience. The dual nature of these systems naturally influences both users and creators: users' preferences are affected by the items they are recommended, while creators may be incentivized to alter their content to attract more users. We define a model, called user-creator feature dynamics, to capture the dual influence of recommender systems. We prove that a recommender system with dual influence is guaranteed to polarize, causing diversity loss in the system. We then investigate, both theoretically and empirically, approaches for mitigating polarization and promoting diversity in recommender systems. Unexpectedly, we find that common diversity-promoting approaches do not work in the presence of dual influence, while relevancy-optimizing methods like top-$k$ truncation can prevent polarization and improve diversity of the system.
https://openreview.net/pdf/85ebe96bf9fedbc2fcdde28d66e0c8df3b4c3061.pdf
[ { "confidence": 5, "rating": 7, "review_id": "lOaIhR0ul3", "review_text": "This paper models dynamics of both users and creators in a recommender system. The user features shift in the direction of the content recommended to them. The creator dynamics are strategically motivated i.e. they try to align content to attract their audience, to increase profit. \n\nThe authors then provide sufficient conditions for this model of dual dynamics to converge to polarization under a natural assumption that each creator has some non zero probability of being recommended to every user.\n\nThe paper then discusses four real world recommendation designs, and whether they cause polarization or multiple clusters etc. They also provide results on synthetic and Movielens data complementing theory results and show that certain recommender designs do lead to polarization vs diverse clusters.\n\n- This paper is the first to consider dynamics of both users and creators in a recommender systems and provides sufficient analytic conditions for polarization\n- They apply this theory to 4 natural designs: (1) Top-k, (2) Truncation, (3)Diversity boosting and (4) Lower bounding probability. They show that rules (3, 4) lead to polarization and rule (1) leads to diverse clusters. This section is particularly insightful.\n- The experimental evaluation with synthetic and Movielens data is also insightful and complements the theory. The softmax probability leads to diminishing creator and recommendation diversity over time. They also study top-k probability and show how lower k is better for higher creator diversity, recommendation relevance.\n\nA criticism I had while reading the paper are gaps in literature for the discussion on dynamics in recommender systems. In addition to [Eilat & Rosenfeld] referenced in the introduction, [1,2,3,4,5,6] consider creator dynamics in recommender systems. These works assume static user features and provide results on content at equilibrium and user welfare. In the context of these works, it would be beneficial to highlight how your work is the first to consider both creator and user dynamics.\n\n[1] A Game-Theoretic Approach to Recommendation Systems with Strategic Content Providers (Ben-Porat and Tennenholtz)\n\n[2] Supply-side equilibria in recommender systems (Jagadeesan et al)\n\n[3] How Bad is Top-k Recommendation under Competing Content Creators? (Yao et al)\n\n[4] Modeling content creator incentives on algorithm-curated platforms (Hron et al)\n\n[5] Producers Equilibria and Dynamics in Engagement-Driven Recommender Systems (Acharya et al)\n\n[6] User Welfare Optimization in Recommender Systems with Competing Content Creators (Yao et al)\n\n- I understand the motivation for the form of user update in equation (3) . In each recommendation step an item is recommended and the user preference shifts in that direction, this is like in [Dean and Morgenstern]. Can you motivate the update in Eq (4), is this myopically optimal for the creator to do, and how does it generalize [Eilat & Rosenfeld]?\n- Minor Typo? For Figure 6, larger $\\rho$ seems to lead to higher creator diversity (green curve)." }, { "confidence": 3, "rating": 7, "review_id": "klRQoAUejv", "review_text": "The paper explores the dynamics between users and content creators in recommender systems, highlighting the dual influence where users’ preferences are shaped by recommendations and creators modify their content to align with what is more likely to be recommended. The study defines a model called user-creator feature dynamics to capture these interactions, demonstrating that such systems are prone to polarization, resulting in a loss of diversity. The paper then examines various approaches to mitigate polarization and improve diversity, finding that relevancy-optimizing methods, such as top-k recommendations, can prevent polarization more effectively than traditional diversity-promoting approaches.\n\nThe paper provides an interesting perspective by addressing the mutual influence between users and creators in recommender systems. The theoretical results and experimental validation using both synthetic and real-world data look credible. The writing is overall easy to follow.\n\n1. There are two lines of works focusing on modeling content creator dynamics and user preference evolving dynamics that are neglected by the authors. I listed several representative works and it would be great to include a comprehensive literature review regarding these works in the related work section.\n\n2. One of your main observation (larger $\\beta$ leads to higher creator diversity and alleviated polarization) is actually pointed out in [1] under a similar model, where content creators compete for a fixed user population (see section 3.2 in [1]). And another main observation in section 5.3 that smaller $k$ improves diversity does not echo the result in [2], which shows that larger $k$ improves the total creator utilities. It would be better to include some detailed discussions regarding these two works. \n\n3. The user/creator preference updating dynamics need more justifications and empirical evidence.\n\n4. The dynamical model makes some sense to me, but it would be more interesting to understand whether the observations still hold in the presence of noise. If the noisy version is hard to analyze theoretically, additional simulation results could also be valuable.\n\n\n[1]. Modeling Content Creator Incentives on Algorithm-Curated Platforms\n[2]. How Bad is Top-K Recommendation under Competing Content Creators?\n[3]. Online recommendations for agents with discounted adaptive preferences\n[4]. Recommender systems as dynamical systems: Interactions with viewers and creators\n[5]. Learning from a learning user for optimal recommendations\n[6]. Supply-side equilibria in recommender systems\n\n1. In theorem 3.3, how does the convergence rate depending on the temperature parameter $\\beta$? I ask this because when $\\beta\\rightarrow +\\infty$, the softmax recommendation strategy is equivalent to the top-1 recommendation strategy. In this case, Proposition 4.2 predicts that the top-1 recommendation lead to $n$ clusters rather than bi-polarization, which seems to contradict theorem 3.3.\n2. I do not fully get why the specific forms of function $f$ and $g$ do not affect the analysis. Is it because your main results only depend on the range of $f$ and $g$?\n3. In the experiments, the range of $\\beta$ is quite conservative. I'm curious about the result under a larger range of $\\beta$." }, { "confidence": 4, "rating": 6, "review_id": "y6WCTXsGmt", "review_text": "This paper studies how recommendations become polarized over the long run when user and creator features dynamically change over time. The authors theoretically prove that, under the assumption that every creator can be recommended to every user with some non-zero probability, recommender systems will eventually converge to polarization. They also simulate some real-world models, including top-k recommendation, truncation, diversity boosting, and lower-bounding probabilities in a long-term setting. The key observation is that top-k recommendation (i.e., only recommending top-k items to users) can reduce polarization to some extent, while existing diversity-boosting methods will worsen polarization when user/creator features dynamically change over time in the system.\n\n1. The authors provide both theoretical and empirical evidence showing that relevance-focused recommendations (as opposed to diversity-focused recommendations), which harm diversity in a static setting, are actually effective in improving diversity in the long term. This observation is somewhat counter-intuitive to previous beliefs, making it very interesting.\n2. The authors conducted simulations with both synthetic data and real-world data (i.e., Movielens) using four diversity and relevance-related measures. Additionally, the analysis with sensitivity parameters in softmax is insightful and supports the authors' main claim.\n3. Studying diversity in a dynamic setting is novel.\n\n1. Despite the novelty and interestingness, I have concerns about the key assumptions of the theoretical and empirical analyses. The assumption that all items can be recommended to users is not realistic. In practice, almost all recommender systems rely on top-k recommendations for either effectiveness or resource constraints like screen size. For example, on platforms like Netflix or Amazon, customers can only see a certain number of items on the webpage (i.e., p=0 for items that users can't see). Even if they can scroll down and the system continually recommends new items, they cannot physically see all items in the system. Thus, I believe the top-k setting is the most realistic and natural for real-world scenarios, and this seems like a hole in the authors' analyses. In this sense, the measures for empirical analysis should also only consider top-k items, not all items.\n2. For the real-world designs, it would be more extensive if users included trustworthiness-aware recommender systems that consider dynamic/continual settings. For example, [1] consider performance difference between two different user groups when the user/item features are continually updated over time in the systems.\n3. For the analysis with Movielens, considering the interaction timestamp in the simulation would more accurately reflect real-world scenarios, for example, for determining the true labels.\n\n[1] Yoo et al., Ensuring User-side Fairness in Dynamic Recommender Systems, WWW'24\n\n1. Please address the points I raised in Weaknesses.\n2. (Minor) Are both consensus and bi-polarization conceptually polarization?\n3. (Minor) How are the initial user/creator embeddings initialized in the Movielens experiment?" } ]
yWSxjlFsmX
Is Mamba Compatible with Trajectory Optimization in Offline Reinforcement Learning?
Transformer-based trajectory optimization methods have demonstrated exceptional performance in offline Reinforcement Learning (offline RL). Yet, it poses challenges due to substantial parameter size and limited scalability, which is particularly critical in sequential decision-making scenarios where resources are constrained such as in robots and drones with limited computational power. Mamba, a promising new linear-time sequence model, offers performance on par with transformers while delivering substantially fewer parameters on long sequences. As it remains unclear whether Mamba is compatible with trajectory optimization, this work aims to conduct comprehensive experiments to explore the potential of Decision Mamba (dubbed DeMa) in offline RL from the aspect of data structures and essential components with the following insights: (1) Long sequences impose a significant computational burden without contributing to performance improvements since DeMa's focus on sequences diminishes approximately exponentially. Consequently, we introduce a Transformer-like DeMa as opposed to an RNN-like DeMa. (2) For the components of DeMa, we identify the hidden attention mechanism as a critical factor in its success, which can also work well with other residual structures and does not require position embedding. Extensive evaluations demonstrate that our specially designed DeMa is compatible with trajectory optimization and surpasses previous methods, outperforming Decision Transformer (DT) with higher performance while using 30\% fewer parameters in Atari, and exceeding DT with only a quarter of the parameters in MuJoCo.
https://openreview.net/pdf/e8f05bc8b78365623dc8f45e047f65b46390a923.pdf
[ { "confidence": 3, "rating": 6, "review_id": "m4uv2OLFQ0", "review_text": "This paper comprehensively investigates the possibility of leveraging Mamba for trajectory learning. The authors take Decision Mamba as a playground and analyse the performance of this model over trajectory learning scenarios (gym/mujoco) from several aspects. A group of conclusions are attained through rigorous experiments, which is solid and potentially valuable for further researches realted with Mamba.\n\n1. Novelty: given that Mamba is still at its exploratory stage, this paper positively probes Mamba's potential for tranjectory learning, with surprising results that with some specific pre-conditions, Mamba is more suited than Transformer.\n\n1. Most discoveries in this paper have been implicitly discussed for several months within the community, while it is firstly presented officially in this paper. Besides, these dicoveries lean to be emparical evidence, which is relatively shallow. This would make this paper's technical contribution weak. I would appreciate the authors if they could provide more in-depth explanation over these discoveries, in particular: 1) Transfomer-like model favors short sequence. 2) The significant role of the hidden attention. \n\n2. Although the experimental results are solid, I found that this paper is more suitable for Benchmark Track, since the technical novelty revolves around benchmarking Decision Mamba. \n\n3. Figure 1 (the title and the pic) should be improved. For now, it confuses me, especially the corresponding relationship between the text content (title) and the illustration.\n\n3. Minor: line 295: may more suitable -> may be more suitable\n\nsee Weaknesses." }, { "confidence": 3, "rating": 6, "review_id": "OLd6YCKqkx", "review_text": "This paper investigates how Mamba perform in trajectory optimization in offline RL with ablation analysis on mamba's data input structures and architectural structures and shows Mamba DT can achieve SOTA performance with less parameters.\n\n1. The paper writing is good, the visualizations look good.\n2. The input concatenation experiments provides useful practical insight also for other sequence-based decision-making models \n3. The paper provides a detailed analysis of how various components of Mamba, such as the hidden attention mechanism and different residual structures, influence performance.\n\n1. Finding 3 is not very surprising on the tested MDP environment, since they by definition should focus only on recent states. It will be interesting to explore how this mechanism might perform in environments with long-term dependencies where the Markov property does not hold strictly.\n2. Only tested on standard Atari and mujoco tasks. How would mamba perform on tasks that requires long horizon planning skills? such as maze navigation or tasks with delayed rewards.\n\nplease see weakness." }, { "confidence": 3, "rating": 5, "review_id": "IHkKHwLq2y", "review_text": "The work introduces Decision Mamba (DeMa) to address the challenges in offline RL posed by the large parameter size and limited scalability of Transformer-based methods. DeMa aims to achieve similar performance to Transformers with significantly fewer parameters. DeMa surpasses the DT with significantly fewer parameters in the benchmarks.\n\n1. Extensive evaluations demonstrate the effectiveness of DeMa, highlighting its superior performance and efficiency compared to existing methods.\n\n2. DeMa provides a novel solution to the parameter size and scalability issues in trajectory optimization.\n\n1. Some symbols are not defined before use.\n\n2. This paper seems to have little relation to RL and appears more like a method applicable to all trajectory optimization.\n\n3. There is too little discussion on the relationship to RL in sections 3.2 and 3.3.\n\n1. What is the definition of $L_{MSE/CE}$ and $_{-K:t}$?\n\n2. Is DeMa applicable to all trajectory optimization methods?" } ]
yW3tlSwusb
Accelerating ERM for data-driven algorithm design using output-sensitive techniques
Data-driven algorithm design is a promising, learning-based approach for beyond worst-case analysis of algorithms with tunable parameters. An important open problem is the design of computationally efficient data-driven algorithms for combinatorial algorithm families with multiple parameters. As one fixes the problem instance and varies the parameters, the “dual” loss function typically has a piecewise-decomposable structure, i.e. is well-behaved except at certain sharp transition boundaries. Motivated by prior empirical work, we initiate the study of techniques to develop efficient ERM learning algorithms for data-driven algorithm design by enumerating the pieces of the sum dual loss functions for a collection of problem instances. The running time of our approach scales with the actual number of pieces that appear as opposed to worst case upper bounds on the number of pieces. Our approach involves two novel ingredients – an output-sensitive algorithm for enumerating polytopes induced by a set of hyperplanes using tools from computational geometry, and an execution graph which compactly represents all the states the algorithm could attain for all possible parameter values. We illustrate our techniques by giving algorithms for pricing problems, linkage-based clustering and dynamic-programming based sequence alignment.
https://openreview.net/pdf/e63c1b5cb2a01f0376fee17da54317b5c6b743cc.pdf
[ { "confidence": 2, "rating": 5, "review_id": "BXcc4BUZxQ", "review_text": "This paper addresses the problem of learning optimal parameters for data-driven algorithm design. A characteristic of the problem is that the dual loss function, which measures the performance of an algorithm as a function of parameters, is discontinuous. Nevertheless, the dual loss is typically piecewise structured (constant, linear, etc.) with linear boundaries. Thus, roughly speaking, the problem of finding optimal parameters reduces to exploring polytopic cells partitioned by boundary hyperplanes. \n\nThe main contribution is a cell enumeration algorithm that runs in output-polynomial time. The algorithm can be seen as a breadth-first search on a cell adjacency graph, where the enumeration of neighbors is done in an output-sensitive manner based on Clarkson's algorithm. The resulting output-sensitive complexity can be significantly better than the worst-case, as demonstrated in Example 1.\n\nThe authors then instantiate the ERM method based on the cell enumeration for linkage-based clustering and DP-based sequence alignment. The applications involve designing execution graphs, which originate from the execution tree of [BDL20]. Combining appropriate problem-specific execution graphs with the cell enumeration leads to the improved time complexity of ERM in several data-driven algorithm design problems, as in Table 1.\n\n1. The paper addresses the important problem of optimizing algorithm parameters in data-driven algorithm design. \n2. The theoretical results given in Table 1 appear strong compared with previous ones. \n3. The output-sensitive cell enumeration might be of independent interest in the context of computational geometry.\n\n1. The paper would have been more appealing if implementations of the proposed methods and experimental results were provided.\n2. The paper is somewhat dense and it is not easy to follow the technical details.\n\n1. I would like to know more intuition about how AugmentedClarkson differs from the original Clarkson and why it is important. \n2. While the paper focuses on linear boundaries, some studies on data-driven algorithm design consider parameter spaces partitioned by polynomials:\n\nhttps://proceedings.neurips.cc/paper_files/paper/2022/hash/db2cbf43a349bc866111e791b58c7bf4-Abstract-Conference.html\n\nhttps://proceedings.mlr.press/v178/bartlett22a.html\n\nhttps://proceedings.mlr.press/v206/sakaue23a.html\n\nIs there a possibility of applying similar enumeration ideas to such situations?" }, { "confidence": 4, "rating": 6, "review_id": "4EquHKAZfg", "review_text": "In data-driven algorithm design, we are given a collection of problem instances sampled from an unknown distribution, and a family of algorithms for solving the problem, typically parameterized by a real-valued multivariate parameter. The goal is to find a setting of parameters such that the performance of the algorithm they parameterize is close to optimal among the family of algorithms, in expectation over the distribution of instances. Most prior work is firstly focused on the generalization aspect, i.e., on showing that a small number of samples suffices (in the sample complexity sense) for finding approximately optimal parameters for the distribution (these are the ERM parameters for the given sample of instance), and thus the family of algorithms is \"learnable\". It then (sometimes) proceeds to develop an efficient algorithm for finding those ERM parameters, based on the structure used to prove the generalization bound.\n\nThis paper focuses more systematically on the ERM efficiency aspect. To this end, it starts with a common theme in many prior works on data-driven algorithms, that had been abstracted out and formulated in a generalized form in Balcan et al. (STOC 2021): for any fixed problem instance, the function that maps a setting of parameters to utility (of invoking their associated algorithm on that instance) admits a simple piecewise structure. Say, there is a small number of \"simple\" boundary functions (say, linear thresholds) that induce a partition of the parameter space R^d such that the utility function restricted to each piece is \"simple\" (say, constant). This is helpful in bounding the VC dimension of the utility functions and thus proving generalization bounds, and also potentially for navigating the parameter space efficiently to find the ERM parameters.\n\nThe novelty in this paper is an attempt to give a more systematic recipe for the second part (navigating the piecewise structure for efficient ERM), with two main advantages -- (1) creating a unified framework that takes care of some parts of the ERM procedure in a general way, thus restricting the portion that needs to be figured out per each problem individually, and (2) obtaining algorithms whose running time depends on the actual number of pieces in the given instances (\"output-sensitive\") rather than worst-case number of pieces. The per-problem part to figure out is a subroutine that, given a problem instance and parameter setting p, returns a list of candidate boundary functions for the piece that contains p, and this subroutine depends on the specific problem in question. The unified part of the framework uses this subroutine to search through the pieces in an \"output-sensitive\" running time.\n\n__Post-rebuttal__: I appreciate the authors' elaborations on the technical content, and the conceptual aspect of the paper. I have raised my score to support acceptance.\n\nThe strength of this paper is that the matter of efficient ERM in data-driven algorithms indeed merits its own systematic study rather than being left as an afterthought of the generalization bounds.\n\nThe main weakness is that the end result isn't very strong: the framework is restricted to linear boundary functions and (more disconcertingly) to a constant number of parameters, and for the most part does not yield improved running times in the worst case, but a different notion of efficiency (output sensitivity). It tends more to systematically organizing ideas that have appeared in the literature in one form or another and less to introducing new algorithmic insights or techniques. I also feel that the presentation and writing could be too opaque for a wide readership like that of NeurIPS.\n\nN/A" }, { "confidence": 2, "rating": 7, "review_id": "O9hqPvdbWo", "review_text": "The paper explores computational aspects of implementing ERM in data-driven algorithm design. \n\nThe paper contributes an efficient algorithm to ennumerate cells induced by a collection of hyperplanes. The paper then shows how to utilize this as a subprocedure to solve ERM problems for algorithm design, focusing on linkage-based clustering, sequence alignment, and two-part tariff pricing.\n\nOne of the main interesting things I find about this paper is that the runtime of the ERM implementations is instance dependent, and specifically depends on the number of sum dual class loss function pieces. The paper comments that their runtime bounds imply improvements over prior work in the worst-case R but also can be faster for \"typical\" R.\n\nThe paper is well-written and easy to follow. The paper discusses relevant background and related work.\n\nTo what extent is the approach generalizable to other data-driven algorithm design problems? Is there a generic principle or a general characterization of the problems for which this approach can be utilized?\n\nSee questions above." } ]
yVzWlFhpRW
Excluding the Irrelevant: Focusing Reinforcement Learning through Continuous Action Masking
Continuous action spaces in reinforcement learning (RL) are commonly defined as multidimensional intervals. While intervals usually reflect the action boundaries for tasks well, they can be challenging for learning because the typically large global action space leads to frequent exploration of irrelevant actions. Yet, little task knowledge can be sufficient to identify significantly smaller state-specific sets of relevant actions. Focusing learning on these relevant actions can significantly improve training efficiency and effectiveness. In this paper, we propose to focus learning on the set of relevant actions and introduce three continuous action masking methods for exactly mapping the action space to the state-dependent set of relevant actions. Thus, our methods ensure that only relevant actions are executed, enhancing the predictability of the RL agent and enabling its use in safety-critical applications. We further derive the implications of the proposed methods on the policy gradient. Using proximal policy optimization ( PPO), we evaluate our methods on four control tasks, where the relevant action set is computed based on the system dynamics and a relevant state set. Our experiments show that the three action masking methods achieve higher final rewards and converge faster than the baseline without action masking.
https://openreview.net/pdf/8a1eecafd6f9a1b8919878b8b034860552119122.pdf
[ { "confidence": 3, "rating": 6, "review_id": "mQ5kL4bUVr", "review_text": "Learn a state specific mask for actions. Rather than simply a state specific interval, extend the action mask to different convex set representations. Then, derive a policy gradient for each of these masking schemes. The masking schemes are ray masks, hypercube transform mask and distributional masks. Applies action masking to seeker and quadrotor tasks and shows that this action masking improves performance.\n\nThe proposed action masking covers a wide range of possible action mask definitions.\n\nThe derived policy gradients are relatively straightforward given the definitions of the action boundaries.\n\nThe derivations appear to be sound when applied empirically.\n\nIt is not clear how easy it is to recover the action masking criteria, especially under the more complex generator or distributional schemes, and it seems like this would be rare\n\nThe experiments are not particularly convincing because they all follow similar control tasks, but it also seems like these are the only tasks for which the action mask could be easily defined.\n\nIt is not clear why related work is not in a separate section, rather than subsection. There does not appear to be a special connection to the introduction.\n\nIt isn't obvious that if G is square and non-singular, that this does not restrict the space of possible relevant action sets, since this would ensure that the hypercube space had an invertible, i.e. one to one, mapping between itself and the action distribution. It seems like many to one would be preferred if the space of the zonotope's hypercube was higher dimension than the action set." }, { "confidence": 2, "rating": 5, "review_id": "iWoCR7wlWj", "review_text": "The paper addresses challenges in RL with continuous action spaces, typically defined as interval sets. These spaces often lead to inefficient exploration due to irrelevant actions. The authors propose three continuous action masking methods to focus learning on relevant actions based on current state, improving predictability and suitability for safety-critical applications. They analyze the implications on policy gradients and evaluate performance using PPO across three control tasks. Results show higher final rewards and faster convergence compared to baseline methods without action masking.\n\n**Originality**\n- The paper presents a unique perspective on action spaces by utilizing the relevance of action utility in tasks to improve performance. Conventional methods are limited to discrete domains (tasks) so applying their methods to continuous environments was interesting to see.\n\n**Significance**\n- The proposed approach has practical implications, especially in complex environments where distinguishing between relevant and irrelevant actions is crucial. Regardless of the coverage of baseline, their methods significantly outperform it establishing the state of the art performance.\n\n**Reinforcement Learning with Policy Gradients (Section 2.1)**\n- L84: \"r →: S × A\" appears incorrect.\n \n**Continuous Action Masking (Section 3)**\n- Assumption 1: Clarify the definition of action relevance.\n \n**Ray Mask (Section 3.1)**\n- L131: Need proof that g(a) is bijective.\n\n\n**Generator Mask (Section 3.2)**\n- Why is A(s) suddenly state-dependent? Provide motivation and further description.\n- In Proposition 2's proof, the matrix multiplication seems infeasible due to mismatched dimensions (C is N x 1 and Ga results in P x 1).\n\n\n**Experiment (Section 4)**\n- Justify the rationale behind the design choices for action relevance in each environment.\n- Compare the chosen action relevance approach to other relevant action settings.\n \n**Results (Section 4.2)**\n- Why compare to a standard PPO baseline and not to prior relevant works?\n- Include qualitative results to validate the proposed methods.\n\nSee weaknesses." }, { "confidence": 4, "rating": 6, "review_id": "MN8IK925Uk", "review_text": "This paper discusses methods for action masking in continuous action spaces to improve convergence stability and sample efficiency in reinforcement learning. The paper introduces three methods for action masking with convex relevant action sets, proves their convergence, and experimentally verifies their effects.\n\nThis paper is excellently written, defines a clear and well-motivated goal, and describes three intuitive and theoretically grounded methods to achieve that goal within well-defined and clearly stated limitations.\n\nTo my knowledge these approaches are novel (though the distributional mask I suspect has been used as a one-off solution in prior work as it is conceptually very simple), and their definition and analysis are nontrivial.\n\nThis paper is pretty solid overall, and I have few major complaints.\n\nThe one significant issue I see is that I think the distributional mask algorithm is off-policy by nature, meaning it's use with on-policy methods like PPO is biased and will cause performance loss or divergence. This may explain the observed underperformance of this masking method in two of the three experimental tasks, and while the algorithm can clearly converge in some cases it seems like a major issue with that particular mask in the context of PPO (off-policy algorithms could use it without issue, but those are left to future work here) that should be noted, or the mask omitted from this paper and left to future off-policy methods.\n\nBeyond that, the experimental evaluation is relatively simple (though I think it is sufficient to validate these algorithms), and more challenging tasks would be useful to demonstrate the limitations of these masking methods. That said, the paper makes it clear that defining a suitable convex relevant action set is a manual process and can be challenging (this is okay as a limitation), so it is understandable why such stress tests are not performed. If there was a way to increase difficulty without major manual action set definition work it would strengthen the evaluation to include it.\n\nI have a few other minor issues and questions noted below, but overall this is a paper that is clear in its goals and describes methods that achieve them, validated to a reasonable standard of theoretical and experimental evidence. There's more that could be done on this topic, but the contribution of this paper is significant on its own, so I'm inclined to recommend acceptance (particularly if something is done to address my concern about the distributional mask above).\n\n-Is assumption 1 (relevant action set is convex) reasonable in most cases? I can imagine disjoint action subsets being relevant in many cases- for example, a self-driving car that needs to steer either left or right to avoid collision, but not go straight ahead or backwards.\n\n-I'm not sure it's actually necessary to compute the policy gradient across the action mask (with the exception of the distributional mask). Once an action is sampled from the policy, the mapping to the relevant action set can simply be treated as part of the environment/transition function which the policy can learn to manipulate without gradient flow. Does this simplify things or am I missing something? This would also permit arbitrary nonconvex relevant sets, I believe.\n\n-For the gradient of the distributional mask in proposition 4, isn't this affected by off-policyness due to the constrained sampling of actions from the policy distribution? For example, if most of the policy distribution probability mass lies outside the relevant set (e.g. in the event of transfer learning to a different task with a new relevant set) the actions sampled will not follow the policy distribution closely and thus \\pi_{\\theta}(a|s) will not be an accurate probability of sampling action a at state s. As noted above, this seems like a big issue that should be noted or addressed, unless I'm missing something that corrects for the off-policyness.\n\n-Small quibble: The goal in figure 2 looks black to me on two different monitors, perhaps using a lighter gray would make it more distinct from the agent position?\n\n-It would probably be reasonable to move the environment definitions for the two quadrotor tasks to the appendix to save space in the main paper, FWIW. I'm not sure the abbreviated version that's present provides all that much context over the text descriptions of the tasks.\n\n-It's not critical to have, and I realize it's a difficult thing to derive a relevant set for by nature, but having an experiment on an environment with 10+ action dimensions would be a nice addition to demonstrate that these masking approach can scale to higher dimensional action spaces tractably. I'd also appreciate some comments on compute cost scaling with action dimension count in the limitations or conclusions sections, if possible, since it seems like compute cost is likely to increase with the dimensionality of the action space." }, { "confidence": 4, "rating": 6, "review_id": "OBvL7ywbPj", "review_text": "This paper proposes mathematical formulations for continuous action masking in reinforcement learning, to incorporate domain-knowledge in the form of state-specific sets of relevant actions. It introduces 3 functional forms to extract relevant actions from the original action space, and consider its effect on the policy gradient. The policy gradient does not change much, and the paper shows that the forms compare similarly and better than learning an agent without any knowledge of the continuous action mask at all.\n\n- The problem of action masking in continuous action space is an underexplored one, but could have major impact in efficacy of agents and incorporating domain-specific knowledge.\n- The proposed continuous action masking could potentially be useful for safety demarcations.\n- The paper provides mathematical frameworks to formulate continuous action masking and also derive their (minimal) effect on the policy gradient.\n- The paper is mostly well-written and explains the mathematical derivations quite well. Quick note that Section 2.1 and 2.2 could be made more integrated, currently they seem completely disconnected.\n\n## 1. General applicability of this paper's ideas\nObtaining a state-specific relevant action set can be really hard. The paper, however, makes contradictory statements about this:\n- L5: \"little task knowledge can be sufficient to identify significantly smaller state-specific sets of relevant actions.\"\n- L284-285: \"assume that an appropriate relevant action set can be obtained. Yet, obtaining this can be a major challenge in practice.\"\n\nFrom the experiments on the 3 environments, it already seems like defining the relevant action set requires a lot of domain knowledge about the state space features and the dynamics function.\n\nAs of now, there does not seem to be any way the ideas in this paper could be useful for any practical domain.\n- Can the authors provide some concrete examples of how one can obtain such relevant action sets for problems of practical interest and scale?\n- Can the authors provide any results on a commonly used continuous action space RL benchmark?\n\n## 2. Gains not coming from the policy gradient, but only because of constraining the action space\nThe paper's proposed formulation is interesting because it uses continuous action masking as part of the learning policy and informs the policy gradient update about the continuous action mask. However, when we look at the resultant policy gradients for each mask in Eq. 10, Line173, and Line199, it seems that the policy gradient simply reduces to $\\lambda_\\theta log \\pi_\\theta(a | s)$ for all cases.\n\nSo, the effective change in implementation is just how the action to take in the environment is model: $a^r = g(a)$. But, this **doesn't utilize continuous action masking to improve the policy learning objective** in any meaningful way. Is my understanding correct in this?\n\nAnother observation that validates the claim that policy learning is not influenced much is seen from the results and L248-249. The initial rewards themselves are significantly higher, which means that the action mask just reduces the space of exploration of the agent so much that, as long as it takes a valid action, it would get a high reward.\n\n## 3. Simpler baselines for continuous action masking\nContinuing from the above point, if all that needs to be done is to compare different formulations of g(a), there is a much simpler alternative perspective:\n- Action-Masking as part of environment: Simply, apply the action mask as part of the environment, without changing the PPO objective at all. So, there is an action-masking filter before executing an agent's action in the environment, and ignore if the action is invalid.\n- Sampling-augmented action-masking: Keep sampling actions from $\\pi_\\theta$ until you find a valid action that can pass through the known continuous action-masking map.\n\nThe current PPO baseline is very weak, and does not utilize action-masking at all. It seems most of the learning in PPO is going into the effort of learning the action mask. To really justify this paper's proposed action masking schemes are useful, they must compare against other forms of naive action masking, including the two listed above. This perspective on considering the action mask as part of the environment is also much more generally applicable and does not require any change to the policy gradient update.\n\nListed in weaknesses." } ]
yVu5dnPlqA
MAmmoTH2: Scaling Instructions from the Web
Instruction tuning improves the reasoning abilities of large language models (LLMs), with data quality and scalability being the crucial factors. Most instruction tuning data come from human crowd-sourcing or GPT-4 distillation. We propose a paradigm to efficiently harvest 10 million naturally existing instruction data from the pre-training web corpus to enhance LLM reasoning. Our approach involves (1) recalling relevant documents, (2) extracting instruction-response pairs, and (3) refining the extracted pairs using open-source LLMs. Fine-tuning base LLMs on this dataset, we build MAmmoTH2 models, which significantly boost performance on reasoning benchmarks. Notably, MAmmoTH2-7B’s (Mistral) performance increases from 11% to 36.7% on MATH and from 36% to 68.4% on GSM8K without training on any in-domain data. Further training MAmmoTH2 on public instruction tuning datasets yields MAmmoTH2-Plus, achieving state-of-the-art performance on several reasoning and chatbot benchmarks. Our work demonstrates how to harvest large-scale, high-quality instruction data without costly human annotation or GPT-4 distillation, providing a new paradigm for building better instruction tuning data.
https://openreview.net/pdf/bd04b97c020c0de7784c77b26776ae56292d2d38.pdf
[ { "confidence": 4, "rating": 8, "review_id": "2MgYi6x7tT", "review_text": "The paper proposes a 3-stage pipeline to harvest ex-large-scale instruction data from the pre-training web corpus to enhance LLM reasoning, which involves 1) recalling relevant documents, 2) extracting instruction-response pairs using LLM, and 3) refining the extracted pairs by completing the intermediate reasoning steps using LLM.\n\nThe paper\n\n- proposes an effective pipeline to synthesize large-scale high-quality instruction data, especially for reasonable prompts and reliable answers;\n- empirically validates the effectiveness of scaling up instruction data for reasoning tasks;\n- builds `MAmmoTH2-Plus` models, achieving performance superior to or comparable with previous SotA on various reasoning datasets;\n- provides a ex-large-scale instruction dataset for reasoning tasks, *WebInstruct*, as unique public data resource;\n- conducts extensive ablation studies, providing many insights like:\n - SFT loss is better for LM loss (at least when evaluated on QA tasks);\n - refining extracted instruction pairs by completing the intermediate reasoning steps is significantly helpful;\n - using multiple LLMs to refine the instruction data is usually better than a single LLM;\n - “Education” data (exam-style) are usually better than “Forum” data (discussion-style) (at least when evaluated on QA tasks);\n - even benchmarks conventionally thought very relevant might conflict with each other (GSM & MATH in Table 5), implying limited generalization of LLMs.\n\n- The scaling effect of instruction data is an important empirical question. The paper is the first to scale instruction data to 10M pairs, showing the feasibility and effectiveness of scaling up instruction data (for reasoning tasks).\n- Synthesis of high-quality prompts and answers is important for further data augmentation but rather under-explored. The paper finds an effective method to synthesize reasonable prompt and relatively reliable answers by harvesting from web corpora.\n- `MAmmoTH2-Plus` models achieve performance superior to or comparable with previous SotA on various reasoning datasets.\n- Extensive experiments are conducted on various base models and especially diverse challenging reasoning benchmarks, instead of easy ones with limited scope (e.g. many benchmarks similar to GSM8K), convincingly validating the method's effectiveness.\n- Many insightful and useful observations in ablation studies (as mentioned in the summary).\n- The paper is generally well written to be clear and detailed.\n\n- It might need further consideration about **whether training on *WebInstruct* is compatible with or necessary to be added to existing training pipelines to achieve the best final performance (for reasoning tasks)**. The paper achieves its best performances (`MAmmoTH2-Plus`) with a 2-stage instruction tuning on pre-trained models but doesn’t involve continual pre-training, which should be rather important for models’ reasoning abilities as proved by works like DeepSeek-Math. Pre-training and RL should be out of this work’s scope. But it would be better to further clarify the impacts of 1) continual pre-training, 2) training on *WebInstruct*, 3) final fine-tuning on additional instruction datasets and their combinations.\n - Table 7 shows the performance on reasoning benchmarks of applying 2/3/2+3 on Mistral-7B/Mixtral-8x7B. But **the comparison might be a little unfair**: the domains of the “Public Datasets” are wider than those of the *WebInstruct* with the code generation dataset *Code-Feedback*, but the benchmarks only involve mathematical and scientific reasoning in natural language, which might underestimate the performance of “Public Datasets”, considering the possible confliction between code generation and reasoning in natural language. It might be better to remove *Code-Feedback* from the “Public Datasets” to compare with *WebInstruct*.\n - **To consider 1) continual pre-training**, it is impossible to conduct by yourselves, but a possible workaround could be to make full use of the resources provided by DeepSeek-Math: DeepSeekMath-7B is continually pre-trained from DeepSeek-Coder-Base-v1.5. By comparing performances on reasoning benchmarks of applying 2/3/2+3 on DeepSeek-Coder-Base-v1.5/DeepSeekMath-7B and the two models themselves, a more comprehensive study on the impacts of these training stages can be done.\n- Table 7 shows that, for strong Mixtral-8x7B, the gains of adding *WebInstruct* to “Public Datasets” is marginal, implying that **the effect of *WebInstruct* for strong base models might be limited**.\n\n---\n\n# After rebuttal and discussion\n\nThe authors resolved most concerns and validated that MAmmoTH2 can efficiently substitute continual pre-training in the standard SotA pipeline. The limitation is that MAmmoTH2 fails to combine with continual pre-training to effectively push forward the upper limit.\n\nI decide to change my score to 8.\n\nSuggestions:\n\n- The refinement step is important and the current setting can be seen as distillation from strong models (Mixtral-22B×8 and Qwen-72B). The method could be more promising if it could help self-improvement/weak-to-strong generalization. I highly recommend adding experiments of training Mixtral-22B×8 and Qwen-72B or stronger models in future versions.\n\nConfusions:\n\n- Are training data sizes in experiments for Table 5 controlled to be comparable?\n- What does the Data Source “Base“ mean in Table 5?" }, { "confidence": 4, "rating": 5, "review_id": "GO6bsUWMPw", "review_text": "The paper introduces MAmmoTH2, a novel approach to instruction tuning for large language models (LLMs) by harvesting naturally existing instruction data from the web. The authors develop a three-step pipeline (recall, extract, refine) to collect 10 million high-quality instruction-response pairs without relying on costly human annotation or GPT-4 distillation. Fine-tuning LLMs with this dataset significantly improves performance on reasoning benchmarks. The MAmmoTH2-Plus model, further tuned on public instruction datasets, achieves state-of-the-art results on multiple benchmarks.\n\n- Demonstrates a cost-effective way to collect large-scale, high-quality instruction data from the web.\n- Significant performance gains on reasoning benchmarks, with MAmmoTH2 models outperforming existing models.\n- Comprehensive evaluation across multiple benchmarks, showing robust improvements.\n\n- The approach primarily combines existing methods (data recall, extraction, refinement) rather than introducing fundamentally new concepts or techniques.\n- More explicit comparison with prior work is needed to highlight the unique contributions and differences of this approach.\n- The quality and diversity of the collected data heavily depend on the web sources, which may introduce biases or inconsistencies.\n\n- How does MAmmoTH2 compare directly with other methods that use synthetic or human-annotated data in terms of data quality and model performance?\n- What measures were taken to ensure the quality and relevance of the extracted Q-A pairs from the web?\n- How does the model address potential biases in the web-sourced data, and what steps were taken to mitigate these biases?" }, { "confidence": 5, "rating": 7, "review_id": "K3uVbPJ6SW", "review_text": "This paper proposes an approach to automatically harvest large-scale instruction data from pre-training corpora for reasoning tasks. The main steps include: (1) Recall: training a fastText model to recall relevant documents from the pre-training corpus, similar to DeepSeekMath; (2) Extract: using open-source models with few-shot prompting to extract question-answer pairs from the recalled documents; (3) Refine: prompting open-source models to remove noise, adjust formats, and complete the reasoning process for the extracted question-answer pairs.\n\nUsing this method, the authors harvested 10 million instruction data and trained MAmmoTH2 models. Without relying on closed-source models, MAmmoTH2 achieves excellent performance on various reasoning tasks.\n\n- Method: The motivation is clear, and the idea of automatically extracting instruction data from web data is novel, simple, and scalable.\n- Experiments: The experiments and evaluations are comprehensive and achieve good results.\n- Well Written: The paper is very easy to understand.\n- Reproducibility: The authors have open-sourced part of the corpus, models, and evaluation scripts to ensure the reproducibility of the results.\n\n1. Effectiveness: I wonder if the WebInstruct approach can further improve the performance of state-of-the-art domain models. For example, DeepSeekMath achieved good results by only training on recalled documents and fine-tuning on high-quality data (MATH: DeepSeekMath-7B-Instruct 46.8% vs. MAmmoTH2-7B-Plus's 45.0%). Moreover, since the models have already been trained on SFT data, comparing only the few-shot performance is not comprehensive enough. I suggest also comparing the performance of the Plus version trained with high-quality \"additional instruction datasets\" for most of the experiments. Consider supplementing the following results:\n - Recall + Plus: Directly train on the 18M recalled documents and fine-tune a Plus version to verify if the \"extract + refine\" steps have significant benefits.\n - Recall + Extract + Plus: Directly train on the extracted QA (Fig.5, Extracted QA) with LM/SFT loss and fine-tune a Plus version to verify the benefits of the refine step.\n - In Fig.5, I also recommend reporting the performance after fine-tuning the Plus version for SFT loss vs. LM Loss.\n\n2. Lack of method details:\n - For example, the code for the recall stage and the prompts used for extraction and refinement could be included in the repository or appendix.\n - In Sec. 5.1, I suggest explicitly defining the SFT Loss to help more readers understand it clearly. By \"SFT Loss\", I understand the authors mean \"masking the loss of instruction input\", right?\n\n3. Scalability:\n - The effectiveness of WebInstruct constructed using small models is unknown for larger models; moreover, this approach is difficult to apply to models with hundreds of billions of parameters due to high inference costs.\n - During refinement, the model generate missing explanations. Have you observed and quantified the hallucination phenomenon? If present, such incorrect reasoning processes can negatively impact model training, such as increasing hallucination/bias, especially if the corpus is used for larger models.\n\n4. Minor points:\n - Some citations are missing for baselines in Table 2, e.g., Gemma, Abel, and Rho-1.\n - How can the WebInstruct approach be extended to more general domains? What other issues need to be addressed?\n - A concurrent work, Jiuzhang3.0 [1], is quite similar in motivation and method. It would be better to discuss and compare with it. What are the advantages and issues of MAmmoTH2 compared to Jiuzhang3.0?\n \n---\n\n[1] Zhou, Kun, et al. \"JiuZhang3. 0: Efficiently Improving Mathematical Reasoning by Training Small Data Synthesis Models.\" arXiv preprint arXiv:2405.14365 (2024).\n\nSee weaknesses." }, { "confidence": 3, "rating": 7, "review_id": "iebDbDTkUZ", "review_text": "This paper proposes a method to synthesize instruction tuning data at scale from the pretraining web corpus. The proposed method first recalls relevant documents from the corpus, and then extracts QA pairs, and finally refines the extracted QA pairs with an LLM. The synthesized instruction data proves to be helpful in enhancing the model’s reasoning abilities compared with instruction tuning data from other sources.\n\n1.\tThe proposed method is novel and effective.\n2.\tThe authors conduct extensive experiments to demonstrate that it’s possible to synthesize tuning data from unsupervised text corpus to build strong LLMs that outperform models trained with data collected in existing paradigms.\n3.\tThe paper is well-written and easy to follow. The code and data are released, which will serve as high-quality resources for research and building strong LLMs.\n\nThere lacks a discussion and comparison with a related work “Self-alignment with Instruction Backtranslation” (Li et al., ICLR'24) which also synthesizes instruction tuning data from unlabeled corpus.\n\nLLMs are used in the “extract” and “refine” steps in the proposed pipeline for generating and editing instruction tuning data. Will the choice of LLMs introduce bias into the synthesized data (especially compared with distillation-based methods)?" } ]
yUqUBGioBG
Class Distribution Shifts in Zero-Shot Learning: Learning Robust Representations
Zero-shot learning methods typically assume that the new, unseen classes encountered during deployment come from the same distribution as the the classes in the training set. However, real-world scenarios often involve class distribution shifts (e.g., in age or gender for person identification), posing challenges for zero-shot classifiers that rely on learned representations from training classes. In this work, we propose and analyze a model that assumes that the attribute responsible for the shift is unknown in advance. We show that in this setting, standard training may lead to non-robust representations. To mitigate this, we develop an algorithm for learning robust representations in which (a) synthetic data environments are constructed via hierarchical sampling, and (b) environment balancing penalization, inspired by out-of-distribution problems, is applied. We show that our algorithm improves generalization to diverse class distributions in both simulations and experiments on real-world datasets.
https://openreview.net/pdf/56a6242c839cd0715aaa64931c08b31501e061f9.pdf
[ { "confidence": 4, "rating": 7, "review_id": "nPWaBc6n7v", "review_text": "This paper first investigates the effect of class distribution changes on comparative zero-sample learning by proposing and analysing a class distribution shifts parameter model, leading to the idea that loss minimisation leads to poor performance of representations over the class distribution shifts. Based on this finding, the authors utilise hierarchical sub-sampling and OOD environment balancing methods to obtain robust representations and address the poor performance caused by class distribution changes in zero-sample learning, and experimentally validate the effectiveness of the methods.\n\n1- This paper studies the distribution bias problem caused by challenging unknown attributes in zero-shot learning and proposes an effective solution, which is important and innovative for solving the distribution bias problem in zero-shot learning.\n\n2- The structure is clear. It enables the reader to quickly follow the research ideas and understand the content of each section.\n\n3- Figures and tables are clear and accurate. The figures and tables in this paper are concise and clear, effectively support the ideas or conclusions, and enable the reader to grasp the critical information quickly.\n\n4- Comparison and ablation studies are comprehensive. The authors demonstrated the superiority of their method through many experiments and analysed various factors.\n\n1- Authors should describe their proposed soft-AUC trends in detail and analyse the penalty to help readers understand how they play a role.\n\n2- In Experiment 1, the authors used the attribute blonde hair to shift the class distribution, but we know that some people may be hairless, so can the attribute gender be used to shift the class distribution?\n\n3- The language of this paper needs to be scrutinized and improved. For example, redundant phrases such as \"with respect to\" (line 32) should be avoided. In addition, there are some grammatical errors that need to be improved, such as \"assumes\" in line 333 should be changed to \"assume\", and \"leverages\" in line 350 should be changed to \"average\".\n\n4- WRITING DETAILS: Abbreviated nouns should be introduced the first time they appear, such as “OOD”.\n\nSee Weaknesses" }, { "confidence": 5, "rating": 7, "review_id": "0a2jNyQoau", "review_text": "This paper proposes a robust representation learning method that could assume the shift between seen classes and unseen classes.\n\nGood presentation and sound method.\n\nLack the experiments on the most popular benchmark of zero-shot learning [1] and comparison to some SOTAs, e.g. [2][3].\n\n[1] Zero-shot learning-the good, the bad and the ugly[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.\n[2] Rebalanced zero-shot learning[J]. IEEE Transactions on Image Processing, 2023.\n[3] Transzero: Attribute-guided transformer for zero-shot learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2022.\n\nSee weakness" }, { "confidence": 2, "rating": 6, "review_id": "VzTKY37jaG", "review_text": "Zero-shot learning classifiers face the challenge of distribution shifts, where the distribution of new classes differs significantly from that of the training data. In this paper, the authors introduce a novel algorithm to address this problem by creating robust representations through hierarchical sampling and environment balancing penalization. \n\nExperimental results also demonstrate a performance increase compared to the baseline ERM model on several real-world datasets.\n\n1. This paper is well-written and easy to understand.\n2. The paper proposes a new model that enables handling unknown attributes for distribution shifts and addresses new classes at test time.\n3. The method is tested through both simulations and real-world experiments.\n\n1. Some parameters need to be clearly defined, for example, $\\rho_{tr}$, $\\rho_{te}$, and $y_{uv}$ in Eq (4).\n2. The proposed method creates multiple environments and computes penalties across them. What is the computational complexity? It's also beneficial to discuss the time complexity of Algorithm 1.\n3. Figure 5 and Figure 6 do not straightforwardly show the performance.\n\nPlease see the Weakness section above.\n\nAdditional questions:\n\n1. Although the authors mention how to calculate the number of environments, it's better to include an ablation study to test the performance with different numbers of environments." }, { "confidence": 3, "rating": 6, "review_id": "fi36Cw92lV", "review_text": "The paper treats the problem of learning models for zero-shot open-world classification settings (open-world meaning previously unseen classes might appear at test time) that are robust to distribution shifts.\n\nThe proposed approach consists of two stages. In the first stage, synthetic environments $S_i$ are sampled from the training data following a hierarchical sampling approach, where first classes and then data pairs according to sampled classes are sampled. \nIn the second stage, the model is updated to minimise a loss composed of standard ERM and the variance over environment AUC scores.\n\nThe benefits of the method are demonstrated on synthetic data, CelebA, and ETHEC (where also on the latter two a distribution shift is introduced synthetically).\n\n- The proposed approach to generate synthetic environments through hierarchical sampling seems neat and novel (even though the idea of generating synthetic environments for learning robust models is not novel, see weaknesses)\n- Adjusting the performance metric in the variance regularisation term for zero-shot verification, using AUC on embedding distances instead of loss (like in VaRex) seems like a nice way of avoiding performance plateaus and enables better performance in the conducted experiments\n- In the experiments, the proposed method shows significant performance gains over ERM\n- It is good to see a theoretical derivation of the necessary number of sampled environments to achieve a minimum number of examples from each class in at least one environment during the hierarchical sampling with a certain probability (Section 4.4). I believe this result should be made more prominent in form of a Proposition with a proof (in the appendix).\n\n- Lack of baselines wrt generation of synthetic environments: The idea of generating synthetic environments to learn models that are robust to distribution shift is not new, and as such the proposed approach should have been compared to existing methods for this. For example, it would be interesting to see how the approach proposed by the authors compares to the approach of the 'Environment Inference for Invariant Learning' paper by Creager et al. (2021).\n- The real data experiments are still semi-synthetic in the sense that the distribution shift is introduced synthetically (and is quite stark). I do understand that finding a dataset that has a stark enough distribution shift of one attribute inherently in it is hard or even impossible, and the synthetic shifts are good for highlighting the potential merits of the proposed approach. However, what is missing is a report on the performance of the proposed approach (in comparison to baselines) on the unshifted train and test sets of CelebA and ETHEC, to ensure that there is no performance trade-off. \n\nMinor:\n- Figure 2 is never referred to in the text - as a result it is unclear what its purpose is.\n- It would be helpful to refer to the result of Section 4.4 already in Section 4.1, when it is claimed that 'hierarchical sampling results in diverse mixtures of any unknown attribute' and that 'smaller subsets with $k < N_c$ classes are likely to exhibit distinct attribute distributions' Section 4.4. backs up these claims. \n- L 235 typo at the end of the line -> (10)\n- Line 151: $h$ needs to be defined before referring to it in an equation, or immediately after the equation.\n\n- How was the number of synthetic environments in Experiments 1 and 2 on real data chosen? What values of $\\alpha$ from Section 4.4 do they correspond to?" } ]
yUckuDjAE0
Learning Bregman Divergences with Application to Robustness
We propose a novel and general method to learn Bregman divergences from raw high-dimensional data that measure similarity between images in pixel space. As a prototypical application, we learn divergences that consider real-world corruptions of images (e.g., blur) as close to the original and noisy perturbations as far, even if in $L^p$-distance the opposite holds. We also show that the learned Bregman divergence excels on datasets of human perceptual similarity judgment, suggesting its utility in a range of applications. We then define adversarial attacks by replacing the projected gradient descent (PGD) with the mirror descent associated with the learned Bregman divergence, and use them to improve the state-of-the-art in robustness through adversarial training for common image corruptions. In particular, for the contrast corruption that was found problematic in prior work we achieve an accuracy that exceeds the $L^p$- and the LPIPS-based adversarially trained neural networks by a margin of 27.16\% on the CIFAR-10-C corruption data set.
https://openreview.net/pdf/5f88f2e26fc915345a56b203801d1cc3dab0b5c5.pdf
[ { "confidence": 4, "rating": 7, "review_id": "7O30XMWUpk", "review_text": "This paper proposes to use input-convex neural networks to learn Bregman divergences as a means to distinguish semantically meaningful image corruptions from random noise perturbations. The approach is linked to classifier robustness by showing how the associated mirror descent algorithm can be used to perform adversarial training against image corruptions coming from a Bregman ball. Experiments on benchmark corruption datasets show that the proposed method outperforms prior learned similarity metrics in distinguishing corruption from noise and in adversarial training. The proposed method is also shown to generalize quite well to corruptions that it is not trained on.\n\nThe paper is very well-written, easy to follow, and has great visualizations. The proposed method appears novel and performative, and is certainly of interest to the ML and robustness communities.\n\nSee questions below.\n\n1. Line 42: \"...that has to be convex and with invertible gradient.\" Do you mean strictly convex?\n2. Line 37: The title of this paragraph is \"Bregman divergence and mirror descent,\" yet mirror descent is not discussed at all. Please give a brief description of mirror descent here.\n3. Line 64: Please put the footnote number \"1\" after the punctuation (period). With the footnote number before the punctuation, it makes it look like an exponent. Please also do the same for all other footnotes.\n4. Line 73: In (3), do you mean \"argmin\" instead of \"min\"?\n5. Table 1: Please write out \"ICNN\" completely as \"input convex neural network\" here, or define the acronym somewhere in the text before Table 1 for readers unfamiliar with ICNNs.\n6. Line 79: This sentence looks strange starting with $\\mathbb{B}_h$; I suggest adding \"The ball\" to the beginning of the sentence.\n7. Table 1: Do you mean \"strictly convex\" instead of \"strongly convex\" in the text underneath the base function?\n8. Line 79: \"...but not necessarily convex.\" This is not true; Bregman balls, as you've defined them, are indeed always convex. Since $h$ is convex, its domain $\\mathcal{X}$ is convex, and therefore convexity of the Bregman ball (4) follows from convexity of $\\mathcal{X}$ together with convexity of $D_h$ in its first argument. On the other hand, if you were to have defined the ball with respect to a fixed point in the first argument of $D_h$ and varying second arguments, then the set is not necessarily convex.\n9. Line 102: I don't recall ever seeing an ICNN defined in terms of Hadamard squares of the input feedthroughs. Can you explain why you are choosing to define your ICNN model using these Hadamard squares, and how the properties of this model might differ from just using the standard linear feedthroughs with arbitrary (not necessarily nonnegative) weights? Your model seems somewhat restrictive in how much influence the feedthrough may have, as its contributions to each preactivation vector are always nonnegative.\n10. Line 123: Again, starting a sentence with math looks strange.\n11. Figure 4: How come Moco appears to perform on par with or better than the other two prior learning-based methods in Figure 4b, but it's accuracy is shown as 0 across all noise levels in Figure 4a?" }, { "confidence": 4, "rating": 4, "review_id": "cVi9TBpwmZ", "review_text": "The authors present an approach to learn Bregman divergences that capture perceptual image similarities according to a given dataset.\nRelying on two input-convex neural networks, they present a procedure that mimics mirror descent over the learned Bregman divergence.\nThe procedure is used to learn networks that are robust to image corruptions. Results on CIFAR-10-C subsets are presented.\n\nThe idea to (attempt to) do mirror descent over learned Bregman divergences in order to train networks robust to corruptions is novel and interesting.\n\nWhile the idea in itself is interesting, I do not find neither the technical presentation, nor the provided results convincing.\n\nMost of the motivation of the work derives from the use of Bergman divergences, which come with an associated mirror descent. However, in practice, I do not think the authors can be claiming to do mirror descent, because of the approximation of the inverse map, and because of the lack of a projection operator. Given the two above limitations, I do not think what the authors do would have convergence guarantees even in the convex case. Taking this into account, the stress on the mathematical motivation of the approach seems to be a bit fragile. Sometimes mathematical concepts are introduced without a clear purpose (for instance, the Legendre type, which is then not really necessary to justify their approximation of the inverse map). I would urge the authors to tone down these claims and refrain from saying they are doing mirror descent: I'd rather call it an approach \"inspired by mirror descent\".\n\nFurthermore, the work assumes that a dataset describing the corruptions is available to learn the divergence: is this a reasonable assumption for datasets such as CIFAR-10-C, which are designed as benchmarks for OOD generalization?\n\nConcerning the results: I do not think the proposed comparisons are fair. Both l2 PGD and RLAT use a threat model which is general and not targeted at specific perturbations. As such, they attain good performance over the entirety of the CIFAR-10-C corruptions. The authors, instead, focus on a small set of perturbations and show results for an algorithm that is explicitly aware of the perturbations the network need to be robust against.\n\n1) How is the dataset for the training of the Bergman divergence obtained? Is this a holdout from the original CIFAR-10-C?\n2) Could the authors add the performance against noise-like perturbations in the comparison against PGD and RLAT? \n3) Would the proposed approach scale to ImageNet-C? I do not think scaling to ImageNet-C is necessary, but I think such discussions should be included.\n4) The employed ICNN is different from the original work [1] as it displays quadratic terms. I understand that the squared term on x added on top of $z^l$ is useful for strong convexity. What about the quadratic terms in equation (5)?" }, { "confidence": 4, "rating": 5, "review_id": "BgS0iO1Afm", "review_text": "The authors propose a new method to learn Bregman divergences from raw, high-dimensional data. This method measures similarity between images in pixel space, and considers two images as similar even if one image is corrupted by real-world corruptions, such as blur, changes in contrast, or weather conditions such as fog. The method does this in-part by simultaneously considering real-world corruptions as close to the original image, while noisy perturbations as far from the original image, even when the $L^p$ distance considers noisy perturbations as close. The authors then define adversarial attacks by replacing the projected gradient descent with mirror descent using the learned Bregman divergence. Through adversarial training on this new learned Bregman divergence, they improve the state-of-the-art in robustness.\n\n- The authors clearly explain the pipeline of the algorithm and give great explanations for the choices they made (e.g. using equation (7) to approximate $\\nabla \\bar{\\phi}$.)\n\n- The authors make a good case for using Bregman divergences and learning the metric, and it seems like an interesting direction.\n\n- The algorithm seems well-motivated at each step of the pipeline, and it looks like the authors took care to make sure each step follows theory. Figures 1, 2, and 3 are helpful in explaining the motivation.\n\n- A big weakness is using only one dataset for comparison. Perhaps the authors could show more experiments on ImageNet-C, and/or the Berkeley-Adobe Perceptual Patch Similarity (BAPPS) that was introduced with one of the methods the authors compared against, LPIPS.\n - On the above note, there are a lot of parts to the algorithm, and it's unclear how hard one has to tune the algorithm to make sure each approximation lines up to get an overall, well-performing model. I would have wanted to see a stress test on the pipeline with larger images.\n - If the authors had comparisons on more datasets and of more difficulty than CIFAR10-C, then I would be inclined to raise the score to an accept.\n\n- The proposed method takes longer to train than other standard adversarial training methods, as mentioned in the appendix.\n\nEDIT: After considering the author responses and reading all the reviewers, I have raised my score from a 4 to a 5.\n\n- Can we see some examples of noisy images that are considered not semantically the same as the original? Under some noise threshold, it seems reasonable for a human to classify a noisy image as the original label, right?\n\n- In Table 3, when using the proposed method, training for one corruption doesn't necessarily perform the best for that corruption (as acknowledged by the authors in the paper). For example, $\\text{MD} \\thinspace D_{\\phi}^{\\text{contrast}}$ does not perform the best on contrast corruptions, but rather $\\text{MD} \\thinspace D_{\\phi}^{\\text{zoom-blur}}$ does the best. Do you have an explanation for that? I would have liked to see this explored and explained, as it goes against intuition.\n\n- There are a lot of approximations, like computing the inverse map. How do you expect your algorithm to perform with higher resolution images?" } ]