Dataset Viewer
Auto-converted to Parquet

Columns:
- context: string (lengths 101 to 1.75k)
- A: string (lengths 103 to 2.54k)
- B: string (lengths 103 to 1.92k)
- C: string (lengths 103 to 1.91k)
- D: string (lengths 101 to 2.37k)
- label: string (4 classes)
Nonetheless, in modern large-scale assessments or surveys, where the data collection scope is unprecedentedly large and high-dimensional, both $N$ and $J$ can be quite large.
JML is currently considered the most efficient tool for estimating GoM models. However, due to its iterative nature, JML's efficiency remains unsatisfactory when applied to modern big datasets with many observations and many items. Therefore, it is desirable to develop more scalable, non-iterative estimation methods to aid psychometric researchers and practitioners in performing GoM analysis of item response data.
Although this JML algorithm is computationally more efficient than MCMC algorithms, it is still not scalable to very large-scale response data due to its iterative nature. Therefore, it is of interest to develop a non-iterative estimation method suited to analyzing modern datasets with a large number of items and subjects.
JML is currently considered the most efficient tool for estimating GoM models. However, due to its iterative nature, JML's efficiency remains unsatisfactory when applied to very large-scale data with many observations and many items. Therefore, it is desirable to develop more scalable, non-iterative estimation methods to aid psychometric researchers and practitioners in performing GoM analysis of modern item response data.
Our method provides estimation results comparable to those of JML and Gibbs sampling, and is much more scalable to large datasets with many subjects and many items.
A
$\gamma - \sum_{j=1}^{N} a_j < \dfrac{\lfloor \log_2(N-1) \rfloor + 3 + 1/(N-1)}{16(N-1)^{2}} = \dfrac{B(N-1) + 2 + 1/(N-1)}{16(N-1)^{2}}.$
Operations that are carried out to obtain each new term of the series, and to update the partial sum and error bound. The total number of these operations is roughly proportional to the total number of terms in the partial sum when the algorithm ends, $N_M$.
Standard simulation methods, as implemented in computer software, typically define the parameter $\tau$ as a floating-point value. This inevitably incurs a loss of precision. More specifically, it is not possible to represent irrational values (or even certain rational values) exactly using floating-point variables. The method presented in this paper avoids that problem, and generates a random variable with the exact parameter $\tau$.
This paper describes a general algorithm that solves the above problem when there exists a representation of $\tau$ as a series with rational terms. The algorithm is described in §2, and its basic properties are addressed. The complexity of the algorithm is analysed in §3. Application to specific values, including Euler's constant $\gamma$ and $\pi/4$, is discussed in §4. Conclusions and future work are presented in §5. Proofs of all results are given in §6.
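As a rough illustration of the kind of procedure described here (a sketch under stated assumptions, not the paper's algorithm), the following Python snippet simulates a Bernoulli variable whose success probability is $\pi/4$, given only the rational partial sums of the Leibniz series: the partial sums bracket $\pi/4$, a uniform variate is built bit by bit as a dyadic rational interval, and both brackets are refined until they separate, so no floating-point arithmetic is involved. All names are illustrative.

```python
from fractions import Fraction
import random

def leibniz_brackets():
    """Yield successively tighter rational brackets [lo, hi] containing pi/4,
    using the alternating Leibniz series 1 - 1/3 + 1/5 - ... (consecutive
    partial sums of an alternating series with decreasing terms bracket the limit)."""
    s = Fraction(0)
    k = 0
    while True:
        prev, s = s, s + Fraction((-1) ** k, 2 * k + 1)
        k += 1
        yield (s, prev) if s < prev else (prev, s)

def exact_bernoulli(brackets, rng=random):
    """Return 1 with probability tau, where `brackets` yields shrinking rational
    intervals containing tau.  A uniform U in (0,1) is represented by a dyadic
    interval refined one random bit at a time; we stop once the two intervals
    separate, so the comparison U < tau is decided exactly."""
    u_lo, u_hi = Fraction(0), Fraction(1)
    t_lo, t_hi = next(brackets)
    while True:
        if u_hi <= t_lo:
            return 1          # U is certainly below tau
        if u_lo >= t_hi:
            return 0          # U is certainly above tau
        if u_hi - u_lo >= t_hi - t_lo:
            mid = (u_lo + u_hi) / 2          # refine U with one more random bit
            if rng.getrandbits(1):
                u_lo = mid
            else:
                u_hi = mid
        else:
            t_lo, t_hi = next(brackets)      # refine the bracket for tau

# sanity check: the empirical mean should be close to pi/4 ~ 0.785
draws = [exact_bernoulli(leibniz_brackets()) for _ in range(2000)]
print(sum(draws) / len(draws))
```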
Thanks to Peter Occil for bringing to the author’s attention the problem of simulating Euler’s constant without using floating-point arithmetic, which led to the algorithm and results presented in this paper; also for pointing out reference [11], and for some corrections to an early version of the manuscript.
D
However, there do not seem to exist theoretical guarantees for the SCMS algorithm to consistently estimate the full ridge set, and, as discussed below (see Section 2.3 and the Appendix), the SCMS algorithm might miss some parts of the ridge, although the point-wise convergence property of SCMS is studied in Zhang and Chen (2023). In other words, it is not entirely clear what SCMS is actually estimating. Even though this does not appear to have a serious impact in many practical examples, this theoretical gap provides a motivation for developing alternative ridge finding algorithms that (i) come with theoretical guarantees offering deeper insights into their behavior, and (ii) do not suffer from potentially missing some parts of the ridge. We mention in passing that there exists another ridge estimation algorithm, developed in Pulkkinen (2015), which tracks the ridge lines. However, it relies on a starting point that is on or close to the ridge.
The remaining part of the paper is organized as follows. In Section 2 we introduce the formal definition of ridges. This is followed by our extraction algorithms, whose performance is illustrated using some numerical studies in $\mathbb{R}^2$. The main theoretical results are given in Section 3, where we give the convergence results of our algorithms. The mathematical framework for the theoretical analyses is provided in Section 4. In Appendix A we give an example for which the SCMS algorithm fails to detect a part of the ridge while our algorithms do not miss it. All the proofs are provided in Appendix B.
We apply our algorithms to a data set of active and extinct volcanoes in Japan available at https://en.wikipedia.org/wiki/List_of_volcanoes_in_Japan. The locations of these volcanoes exhibit a clear filamentary structure with three major branches sharing an intersection. The results using SCMS and our algorithms are shown in Figure 2.6. We used the same bandwidth for all three algorithms, based on an optimal selection for the second derivatives of the kernel density estimation. Using all the sample points as starting points, the outputs of the algorithms are shown in the three left panels. It can be seen that all three algorithms capture the three major branches in the data; however, a careful examination reveals that the output of the SCMS algorithm has a big gap near the intersection of the three branches. To further investigate this issue, we ran each algorithm with a new set of starting values while keeping all the tuning parameters the same. The results are shown in the three right panels. The new starting points are constructed as follows: for each of the $n$ outputs of an algorithm, connect each of the 20 nearest neighbors among the original data points to the output point by a line segment. On each of these 20 line segments choose 10 equidistant points. The resulting $200n$ points are the new starting values of the respective algorithm. The idea underlying this construction is to find starting values that form a dense neighborhood of the true ridge lines. We observe that, although these starting points fill the gap near the intersection well, the detected branches of the SCMS algorithm are still clearly separated, while the branches are better connected in the outputs of our algorithms. See also Section A.1 for similar results in simulations. This is consistent with our argument that the SCMS algorithm may miss some parts of the ridges.
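A minimal Python sketch of the starting-value construction just described, under the assumption that the 10 points on each segment are interior equidistant points; the function name and the toy data are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def dense_starting_points(outputs, data, k=20, m=10):
    """For each output point, connect it to its k nearest neighbours among the
    original data points and place m equidistant points on each segment,
    yielding k * m new starting values per output (200 per output for k=20, m=10)."""
    _, idx = cKDTree(data).query(outputs, k=k)                # (n, k) neighbour indices
    t = (np.arange(1, m + 1) / (m + 1)).reshape(1, 1, m, 1)   # fractions along each segment
    seg = data[idx][:, :, None, :] - outputs[:, None, None, :]
    pts = outputs[:, None, None, :] + t * seg                 # (n, k, m, d)
    return pts.reshape(-1, outputs.shape[1])

# toy usage with hypothetical 2-d ridge outputs and data points
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))
outputs = rng.normal(size=(50, 2))
print(dense_starting_points(outputs, data).shape)             # (10000, 2) = 200 * n points
```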
Brief outline of this section: As the algorithms are targeting $\mathrm{Ridge}(\widehat{f})$, while the theoretical target is $\mathrm{Ridge}(f)$, we first control the distance between these two sets (see Theorem 6). Then, in Theorem 8, we consider the continuous version of the algorithms, where the paths traced by the algorithm are replaced by the integral curves generated by our ridgeness functions. Finally, we consider the discrete version (see Theorem 10), i.e. the actual algorithms, and control the distance between the limit points of the algorithms and $\mathrm{Ridge}(\widehat{f})$.
This section can be interpreted as providing population-level versions of our main convergence results for the proposed algorithms presented above. Indeed, the algorithms can be interpreted as 'perturbed versions' of corresponding population-level procedures. We will discuss the precise meaning of this in what follows, and we also indicate how this correspondence is used to prove the convergence results for the algorithms. This section also provides additional insights into why the algorithms proposed in this work do not suffer from the theoretical gaps of the SCMS algorithm (cf. Section 4.2.1).
A
$\bm{V}_k = \operatorname{diag}\{\psi_k^{\prime}(\bm{r}_k)\}\left(\bm{I}_n - \bm{X}\widehat{\bm{A}}_k\bm{X}^{\top}\operatorname{diag}\{\psi_k^{\prime}(\bm{r}_k)\}\right).$
For a small constant $\eta>0$ independent of $n,p$, say $\eta=0.05$,
error, up to an arbitrarily small constant $\eta>0$ independent of $n,p$.
If $(M,q,\eta,\mu,\gamma)$ and $\tilde{\eta}>0$ are independent of $n,p$, then
$|\psi|\leq M$ for some constant $M$ independent of $n,p$;
A
Clearly, constantly changing the target density within a Monte Carlo algorithm complicates its theoretical analysis. Moreover, in MCMC algorithms, updating the surrogate using past states of the chain causes the loss of the Markov property, so (as in the adaptive MCMC literature) this point needs to be addressed carefully [95, 20].
Secondly, the surrogate construction is driven by the Monte Carlo algorithm; namely, the surrogate is refined in the regions discovered by the algorithm over the iterations.
In the two-stage scheme, the processes of building the surrogate and performing the sampling are separated. During the initial stage, the primary objective is to minimize the bias of the surrogate. In the subsequent stage, our focus shifts to reducing the variance of the Monte Carlo approximation based on the surrogate. Due to the inexpensive nature of evaluating the surrogate, it becomes possible to obtain Monte Carlo approximations with extremely low variance. However, employing surrogates introduces a bias into the final estimators.
A generic MH algorithm targeting a surrogate that is refined over $T$ iterations is given in Algorithm 3. This algorithm falls within the iterative refinement scheme from the previous section.
Blocks B1 and B3 refer to the two possible strategies (offline, or iteratively within the Monte Carlo steps) for building the surrogate. The former considers an offline construction that is totally independent of the Monte Carlo algorithm that will be run afterwards. The latter construction aims to build the surrogate online, i.e., during the Monte Carlo iterations. Lastly, Block B4 refers to making a correction for the fact that we are working w.r.t. $\widehat{p}(\bm{\theta})$, and ultimately implies obtaining a noisy realization $\widetilde{p}_M(\bm{\theta})$.
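A schematic Python sketch of a Metropolis-Hastings sampler targeting a surrogate that is refined online during the iterations, in the spirit of the iterative scheme above; the nearest-neighbour surrogate, the refinement schedule and the Gaussian stand-in for the expensive target are illustrative assumptions, not the paper's Algorithm 3.

```python
import numpy as np

def expensive_logpdf(theta):
    """Stand-in for a costly unnormalised log target (here just a Gaussian)."""
    return -0.5 * np.sum(theta ** 2)

class NearestNeighbourSurrogate:
    """Toy surrogate: returns the stored exact evaluation at the closest design point."""
    def __init__(self):
        self.points, self.values = [], []
    def add(self, theta, value):
        self.points.append(np.asarray(theta, float))
        self.values.append(value)
    def logpdf(self, theta):
        dists = [np.linalg.norm(theta - p) for p in self.points]
        return self.values[int(np.argmin(dists))]

def surrogate_mh(theta0, T=5000, step=0.5, refine_every=50, seed=0):
    rng = np.random.default_rng(seed)
    surrogate = NearestNeighbourSurrogate()
    theta = np.asarray(theta0, float)
    surrogate.add(theta, expensive_logpdf(theta))          # initial design point
    chain = []
    for t in range(T):
        prop = theta + step * rng.normal(size=theta.shape)
        # accept/reject with the cheap surrogate instead of the expensive target
        if np.log(rng.uniform()) < surrogate.logpdf(prop) - surrogate.logpdf(theta):
            theta = prop
        if t % refine_every == 0:   # occasional exact evaluation refines the surrogate online
            surrogate.add(theta, expensive_logpdf(theta))
        chain.append(theta.copy())
    return np.array(chain)

print(surrogate_mh(np.zeros(2)).mean(axis=0))
```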
A
We prove uniform rates for an RKHS estimator of the long term dose response curve, under effective dimension and smoothness conditions on the regression and embedding. An interesting direction for future work is whether uniform rate improvements are possible by placing additional assumptions, perhaps building on the techniques from pointwise, mean square, or excess risk analysis [Kennedy et al., 2017, Nie and Wager, 2021, Foster and Syrgkanis, 2023, Zeng et al., 2024].
Section 6 demonstrates that our long term dose response estimators recover the long term effects of continuous actions reasonably well, using real data.
To demonstrate that our proposed kernel methods are practical for empirical research, we evaluate their ability to recover long term dose response curves. Using short term experimental data and long term observational data, our methods measure similar long term effects as an oracle method that has access to long term experimental data. Our methods outperform some benchmarks from previous work that use only long term observational data.
We illustrate the practicality of our approach by estimating the long term dose response of Project STAR, modelling class size as a continuous action. By allowing for continuous actions and heterogeneous links, our long term dose response estimate suggests that the effects of class size are nonlinear. Using short term experimental data and long term observational data, our method measures similar long term effects as an oracle method that has access to long term experimental data.
Previous methods using kernels for continuous actions do not handle the complex linkage between short term and long term effects across data sources, and therefore cannot be used for long term causal inference. Our contribution is a method that does so.
B
As seen in Section 4.3.2, the performance of SAA is poor for certain instances even with infinitely many samples. Our goal in this section is twofold: first, we complement the discussion sparked by Proposition 7 by investigating whether there exists a policy which can ensure a vanishing regret for the pricing problem under Wasserstein heterogeneity. Then, we discuss open directions towards a principled approach to design and analyze policies beyond SAA.
We next present an alternative sample-size-agnostic policy for which the asymptotic worst-case regret vanishes as $\epsilon$ goes to $0$. Furthermore, we characterize the worst-case performance of that policy, showing that it has the best possible dependence with respect to $\epsilon$.
We show in Section 4.3 that the upper bounds based on the approximation parameter directly imply bounds on the worst-case regret of SAA for Newsvendor under both heterogeneity types and for pricing under the Kolmogorov heterogeneity. Furthermore, we complement these results with lower bounds on the best achievable performance and show that, for these three settings, SAA achieves the best possible dependence in $\epsilon$ and its asymptotic worst-case regret scales linearly with $\epsilon$. For pricing under the Wasserstein distance, the picture is starkly different: the approximation parameter becomes infinite, and hence does not allow us to derive any meaningful upper bound on the worst-case regret of SAA. As a matter of fact, we show that SAA performs extremely poorly and its asymptotic regret does not even vanish when $\epsilon$ goes to $0$. Hence, the performance of SAA deteriorates considerably when deviating only slightly from the i.i.d. regime for this class of problems.
For the Wasserstein distance, we show that, similarly to pricing, SAA incurs a worst-case regret which does not shrink to $0$ as $\epsilon$ goes to $0$; but a policy which inflates the SAA decision appropriately achieves rate-optimality. The proof techniques and results derived for ski-rental leverage the structure of the problem and could be of independent interest.
Recall the pricing problem with Wasserstein distance introduced in Section 4.3.2. For this problem, we have seen that SAA incurs an asymptotic worst-case regret which is not vanishing as $\epsilon$ goes to $0$.
D
We run 50 simulations and we compare the chain ladder model performance to the following three sets of potential models:
$\text{EI}_{\text{R}}$ of the models on the test set across the NAIC datasets. On each dataset, we selected the best performing model from the three families via a validation set.
For validation and testing, we adopt the approach illustrated in Figure 12. In particular, on each of the 50 runs and for each of the three model families (sets (a), (b) and (c)), we first choose the model that minimizes the $\text{EI}_{\text{R}}$ on the validation data. Secondly, we measure the performance of the selected model on the test data.
We display the $\text{EI}_{\text{R}}$ for the different lines of business in Figure 9. The plot shows, for each model family, the $\text{EI}_{\text{R}}$ on the test set of the best performing model within that family for each data set. Interestingly, the union of the apc and clmplus families of models always obtains a lower prediction error on the reserve than using only one of the two families. This indicates that the validation procedure of picking the best model by evaluating the performance on the last diagonal works well in the cases considered.
To evaluate the performance of each set we start by splitting the data into training, validation and testing as illustrated in Figure 11 in Appendix E. Then, for each dataset the best model within the three different model sets is selected based on the validation set, and, finally, the error incidence ($\text{EI}_{m}$) of this best model is then calculated on the test set.
B
A detailed analysis of model selection (i.e., the choice of models in SuperLearner), the choice of the positivity constant $\epsilon$, and potential outliers is presented in the supplementary materials, and justifies the choices made in our analysis. The effects of high fruit intake on preterm birth, preeclampsia, gestational diabetes and SGA birth are -0.0214 (95% CI [-0.0266, -0.0162]), -0.012 (95% CI [-0.0176, -0.00626]), 0.00114 (95% CI [-0.00398, 0.00626]) and -0.0164 (95% CI [-0.0227, -0.0101]), respectively. The effects of high vegetable intake on preterm birth, preeclampsia, gestational diabetes and SGA birth are -0.0442 (95% CI [-0.0491, -0.0393]), -0.00102 (95% CI [-0.0122, 0.0102]), 0.024 (95% CI [0.0191, 0.029]) and -0.0166 (95% CI [-0.0218, -0.0113]), respectively.
As described in detail previously (Haas et al., 2015), nuMoM2b enrolled 10,083 people in 8 US medical centers from 2010 to 2013. Eligibility criteria included a viable singleton pregnancy, 6-13 completed weeks of gestation at enrollment, and no previous pregnancy that lasted $\geq 20$ weeks of gestation. All participants provided informed, written consent and the study was approved by each site's institutional review board. At enrollment (6-13 completed weeks of gestation), women completed a semi-quantitative food frequency questionnaire querying usual periconceptional dietary intake. The full dietary assessment approach has been published (Bodnar et al., 2020). The treatments of interest are servings of fruits and vegetables per day. Other food groups served as important confounders: whole grains, dairy products, total protein foods, seafood and plant proteins, fatty acids, refined grains, sodium and "empty" calories. At least 30 days after delivery, a trained certified chart abstractor recorded final birth outcomes, medical history, and delivery diagnoses and complications. This information provides us with data on responses: preterm birth, SGA birth, gestational diabetes and preeclampsia. Data were also ascertained on maternal age, race, education, prepregnancy body mass index, smoking status, marital status, insurance status and working status. In our analysis, we let the threshold be the 80% quantile of total fruit or vegetable intake. Total fruit/vegetable intake higher than the 80% quantile is considered treated ($A=1$), otherwise not treated ($A=0$). For the outcome, when the adverse pregnancy outcome occurs, we let $Y=1$; otherwise $Y=0$. We will focus on the causal effects of fruit and vegetable intake on the aforementioned adverse pregnancy outcomes.
Preterm birth, small-for-gestational-age birth, preeclampsia, and gestational diabetes are adverse pregnancy and birth outcomes that contribute to one-quarter of infant deaths in the U.S. and pose a tremendous economic and emotional burden for societies and families (Butler et al.,, 2007; Stevens et al.,, 2017; Dall et al.,, 2014). Maternal nutrition is one of the few known modifiable risk factors for adverse pregnancy outcomes (Stephenson et al.,, 2018). Preventing poor outcomes by optimizing preconception dietary patterns is therefore a public health priority. We previously showed that diets with a high density of fruits and vegetables were associated with a reduced risk of poor pregnancy outcomes (Bodnar et al.,, 2020). In the current analysis, we estimated the causal effects of fruit and vegetable intake on adverse pregnancy outcomes in the whole U.S. population of pregnant women. Ideally, we would conduct randomized controlled trials on this population, or a random sample of it, and apply standard causal inference techniques to estimate the ATE. However, sampling and doing experiments on the U.S. pregnant population is not feasible. Therefore, we will transport the causal effects obtained from our prior work using data from the Nulliparous Pregnancy Outcomes
Our methods, with the above adjustments, are applied to each combination of treatment $A\in\{\text{fruit intake},\text{vegetable intake}\}$ and outcome $Y\in\{\text{preterm birth, SGA birth, gestational diabetes, preeclampsia}\}$. All the nuisance functions are fitted with "SL.ranger" (random forests), "SL.glmnet" (penalized GLM) and "SL.mean" in the R package "SuperLearner". Five-fold cross-fitting is used to guarantee the sample splitting condition in Theorem 5. The results are summarized in Figure 3.
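The analysis above uses the R package SuperLearner; as a rough Python analogue of the five-fold cross-fitting step (with a random forest standing in for the ensemble learner, and purely illustrative data), one can obtain the out-of-fold nuisance predictions as follows.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def cross_fit_predictions(make_model, X, y, n_splits=5, seed=0):
    """Cross-fitting: each observation's nuisance value is predicted by a model
    trained on the other folds, so no observation is used to fit its own prediction."""
    preds = np.empty(len(y))
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model = make_model().fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    return preds

# toy usage: a random forest plays the role of the SuperLearner ensemble
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=400)
mu_hat = cross_fit_predictions(lambda: RandomForestRegressor(n_estimators=200, random_state=0), X, y)
print(np.corrcoef(mu_hat, y)[0, 1])
```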
From the results above, we see that the effects of high fruit intake on preterm birth, preeclampsia and SGA birth are significantly negative at level 0.05 in the target population, which implies that eating more fruit potentially causes a lower risk of these adverse pregnancy outcomes. For the results on vegetables, the effects of high vegetable intake on preterm birth and SGA birth are significantly negative. The strict interpretation of the effect of high vegetable intake on preterm birth is: Compared with participants whose vegetable intake is
D
To summarize, the main contributions of this work are: (a) Basis encoding: we propose introducing a slight modification to the computational graph of FM variants that facilitates encoding of numerical features as a vector of basis functions; (b) Spanning properties: we show that our modification makes any model from a family of FM variants learn segmentized functions spanned by pairwise tensor products of the basis of our choice, and inherit the approximation power of that basis; (c) B-Spline basis: we justify the use of the B-Spline basis from both theoretical and practical aspects, and demonstrate its benefits via numerical evaluation; (d) Ease of integration into an existing system: we show how to integrate a model trained according to our method into an existing recommender system that currently employs numerical feature binning, to significantly reduce integration costs; (e) Simplified numerical feature engineering: the strong approximation power of the cubic B-Spline basis allows building accurate models without investing time and effort in manually tuning bin boundaries, using simple uniform break-points instead.
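A small Python sketch of the basis-encoding idea in contribution (a): a numerical feature is mapped to the values of cubic B-spline basis functions over uniform break-points instead of a one-hot bin indicator. The knot construction, function name and parameters are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_features(x, lo, hi, n_bins=10, degree=3):
    """Encode a numerical feature as a vector of cubic B-spline basis values
    over uniform break-points (n_bins + degree basis functions in total)."""
    breaks = np.linspace(lo, hi, n_bins + 1)
    t = np.r_[[lo] * degree, breaks, [hi] * degree]   # clamped knot vector
    n_basis = len(t) - degree - 1
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    feats = np.empty((len(x), n_basis))
    for i in range(n_basis):
        coef = np.zeros(n_basis)
        coef[i] = 1.0
        feats[:, i] = BSpline(t, coef, degree)(x)     # value of the i-th basis function
    return feats

# toy usage: encode a feature living on [0, 50]
vals = np.array([1.0, 12.3, 25.0, 49.9])
Phi = bspline_features(vals, 0.0, 50.0)
print(Phi.shape, Phi.sum(axis=1))   # (4, 13); each row sums to 1 (partition of unity)
```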
A very simple but related approach to ours was presented in Covington et al. (2016). That work uses neural networks and represents a numerical value $z$ as the triplet $(z, z^2, \sqrt{z})$ in the input layer, which can be seen as a variant of our approach using a basis of three functions. Another approach which bears similarity to ours also comes from works which use deep learning, such as Cheng (2022); Gorishniy et al. (2021); Song et al. (2019); Guo et al. (2021). In these works, first-order splines are used in the input layer to represent continuity, and the representation power of a neural network compensates for the weak approximation power of first-order splines. Here we do the opposite: we use the stronger approximation power of cubic splines to compensate for the weaker representation power of FMs.
Since our approach works on any tabular dataset, and isn't specific to recommender systems, we mainly test our approach versus binning on several tabular datasets with abundant numerical features that have strong predictive power: California housing (Pace & Barry, 1997), adult income (Kohavi, 1996), Higgs (Baldi et al., 2014) (we use the 98K version from OpenML (Vanschoren et al., 2014)), and song year prediction (Bertin-Mahieux et al., 2011). For the first two datasets we used an FFM, whereas for the last two we used an FM, both provided by Yahoo (Yahoo-Inc, 2023), since FFMs are significantly more expensive to train when there are many columns, and even more so with hyper-parameter tuning. The above datasets were chosen since they were used in a previous line of work by Gorishniy et al. (2021; 2022) on numerical features in tabular datasets. We chose the subset of these datasets whose labels are either real-valued or binary, since multi-class or multi-label classification problems require a model that produces a scalar for each class; factorization machines are limited to only one scalar in their output.
Putting aside the FM variants, there is a large body of work dealing with training neural networks on tabular data (Arik & Pfister, 2021; Badirli et al., 2020; Gorishniy et al., 2021; Huang et al., 2020; Popov et al., 2020; Somepalli et al., 2022; Song et al., 2019; Hollmann et al., 2022). Neural networks have the potential to achieve high accuracy and can be incrementally trained on newly arriving data using transfer learning techniques. Additionally, due to the universal approximation theorem (Hornik et al., 1989), neural networks are capable of representing a segmentized function from any numerical feature to a real number. However, the time required to train and make inferences using neural networks is significantly greater than that required for factorization machines. Even though some work has been done to alleviate this gap for neural networks by using various embedding techniques (Gorishniy et al., 2022), they have not been able to outperform other model types. As a result, in various practical applications, FMs are preferred over NNs. For this reason, in this work we focus on FMs, and specifically on decreasing the gap between the representation power of FMs and NNs without introducing significant computational and conceptual complexity.
Finally, any comprehensive discussion on tabular data would be incomplete without mentioning gradient boosted decision trees (GBDT) (Chen & Guestrin, 2016; Ke et al., 2017; Prokhorenkova et al., 2018), which are known to achieve state-of-the-art results (Gorishniy et al., 2021; Shwartz-Ziv & Armon, 2022). However, GBDT models aren't useful in a variety of practical applications, primarily due to significantly slower inference speeds; namely, it is challenging and costly to use GBDT models to rank hundreds of thousands of items in a matter of milliseconds.
C
The impact of our contributions lies with a modular and scalable formulation of synthetic AIF agents. Using variational calculus, we have derived general message update rules for GFE-based control. This allows for a modular approach to synthetic AIF, where custom message updates can be derived and reused across models (Cox et al., 2019). As an example we have derived GFE-based messages for a general configuration of two facing nodes, and applied these results to derive specific messages for a discrete-variable goal-observation submodel that is often used in AIF practice.
The general update rules allow for deriving GFE-based messages around alternative sub-models, including continuous-variable models and possibly chance-constrained models (van de Laar
The impact of our contributions lies with a modular and scalable formulation of synthetic AIF agents. Using variational calculus, we have derived general message update rules for GFE-based control. This allows for a modular approach to synthetic AIF, where custom message updates can be derived and reused across models (Cox et al., 2019). As an example we have derived GFE-based messages for a general configuration of two facing nodes, and applied these results to derive specific messages for a discrete-variable goal-observation submodel that is often used in AIF practice.
In this section we apply the general message update rules of Sec. 4.4 to a specific discrete-variable model that is often used in AIF practice. Using the general results we derive messages on this specific model.
The message updates for a data-constrained observation variable (Fig. 3, left) reduce to standard VMP updates, as derived by (van de Laar, 2019, App. A).
A
Comparison with SNN on Real Data. We also tried to compare with SNN on the Glance dataset. In this dataset, the number of users is 1305 and the number of content items is 1471. Unfortunately, SNN was not able to finish running in a reasonable time. Indeed, even on the synthetically generated dataset with only 300 users and 300 items, SNN could not finish running within 24 hours, as reported in the table, because it is not computationally efficient (see Section 4.2 in [4]). To be able to compare it with our algorithm, we would require a distributed implementation of SNN, which is beyond the scope of this work. This also underscores that our proposed algorithm is practically meaningful.
Table 1: Comparison of the performance of MNN and USVT on Glance data. As can be seen, the MSE for MNN is more than 28x lower.
Table 3: MSE, MAE, and runtime for MNN and SNN on synthetic datasets (average ± standard deviation across 10 experimental repeats). "--" means the method did not complete within 24 hours.
Table 2: $R^2$, MSE, MAE, and max error for matrix completion methods on synthetic datasets (average ± standard deviation across 10 experimental repeats).
Comparison with SNN on Real Data. We also tried to compare with SNN on the Glance dataset. In this dataset, the number of users is 1305 and the number of content items is 1471. Unfortunately, SNN was not able to finish running in a reasonable time. Indeed, even on the synthetically generated dataset with only 300 users and 300 items, SNN could not finish running within 24 hours, as reported in the table, because it is not computationally efficient (see Section 4.2 in [4]). To be able to compare it with our algorithm, we would require a distributed implementation of SNN, which is beyond the scope of this work. This also underscores that our proposed algorithm is practically meaningful.
B
In preliminary work presented at QEST 2022 (Kofnov et al., 2022), we provided a solution to this problem leveraging the theory of general Polynomial Chaos Expansion (gPCE) (Xiu and Karniadakis, 2002), which consists of decomposing a non-polynomial random function into a linear combination of orthogonal polynomials. gPCE theory, upon which our approach is based, assures that the polynomial approximation of non-polynomial square-integrable functions converges to the truth by increasing the degree of the polynomial and guarantees the estimation of moments of random variables with complex probability distributions. Once such a polynomial approximation is applied, we take advantage of the work in (Bartocci et al., 2019, 2020) to automatically estimate the moment-based invariants of the loop state variables as closed-form solutions.
In Fig. 1 we illustrate our gPCE-based approach via the Taylor rule in monetary policy, where we estimate the expected interest rate given a target inflation rate and the gross domestic product (GDP). In this example, we approximate the original log function with 5th degree polynomials and obtain a Prob-solvable loop. This enables the automatic computation of the gPCE approximation of the moments in closed-form at each loop iteration ($n$) using the approach proposed in (Bartocci et al., 2019).
The program uses a non-polynomial function (log) in the loop body to update the continuous state variable ($i$). The top right panel contains the Prob-solvable loop (with polynomial updates) obtained by approximating the log function using polynomial chaos expansion (up to 5th degree). In the bottom left, we compute the expected interest rate ($\mathbb{E}[i_n]$) as a closed-form expression in the loop iteration $n$ using the Prob-solvable loop and evaluate it at $n=20$. In the bottom right panel, we compare the true and estimated distributions for a fixed iteration (we sample the loop $10^6$ times at iteration $n=20$).
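A minimal Python sketch of the core gPCE step, not the Taylor-rule loop itself: a degree-5 polynomial chaos approximation of the non-polynomial update log, here for an illustrative Uniform(1, 3) input, with Legendre coefficients computed by Gaussian quadrature; the zeroth coefficient is the approximated mean.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_pce_coeffs(f, a, b, degree=5, quad_order=40):
    """gPCE coefficients of f for a Uniform(a, b) input: projection of f onto
    the Legendre polynomials of the standardised variable z in [-1, 1]."""
    z, w = legendre.leggauss(quad_order)           # Gauss-Legendre nodes/weights
    x = 0.5 * (b - a) * z + 0.5 * (a + b)          # map nodes to [a, b]
    coeffs = []
    for k in range(degree + 1):
        Pk = legendre.Legendre.basis(k)(z)
        num = np.sum(w * f(x) * Pk)                # ~ integral of f(x(z)) P_k(z) dz
        den = np.sum(w * Pk * Pk)                  # ~ integral of P_k(z)^2 dz
        coeffs.append(num / den)
    return np.array(coeffs)

# degree-5 polynomial surrogate of log(x) for x ~ Uniform(1, 3)
c = legendre_pce_coeffs(np.log, 1.0, 3.0)
surrogate = legendre.Legendre(c)                   # polynomial in z = x - 2 here
xs = np.linspace(1.0, 3.0, 5)
print(np.max(np.abs(surrogate(xs - 2.0) - np.log(xs))))   # small approximation error
print(c[0])                                        # E[log X] = 1.5*log(3) - 1 ~ 0.648
```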
In preliminary work presented at QEST 2022 (Kofnov et al., 2022), we provided a solution to this problem leveraging the theory of general Polynomial Chaos Expansion (gPCE) (Xiu and Karniadakis, 2002), which consists of decomposing a non-polynomial random function into a linear combination of orthogonal polynomials. gPCE theory, upon which our approach is based, assures that the polynomial approximation of non-polynomial square-integrable functions converges to the truth by increasing the degree of the polynomial and guarantees the estimation of moments of random variables with complex probability distributions. Once such a polynomial approximation is applied, we take advantage of the work in (Bartocci et al., 2019, 2020) to automatically estimate the moment-based invariants of the loop state variables as closed-form solutions.
Figure 1. The probabilistic loop in the top left panel encodes the Taylor rule (Taylor, 1993), an equation that prescribes a value for the short-term interest rate based on a target inflation rate and the gross domestic product.
A
In this section, we describe the preliminary graph node classification approach based on discrete potential theory after introducing some essential mathematical concepts.
In this paper, we propose a probability-based objective function for semi-supervised node classification that takes advantage of simplicial interactions of varying order. Given that densely connected nodes are likely to have similar properties, our proposed objective function imposes a greater penalty when nodes connected via higher-order simplices have diversified labels. For a given number of distinct labels $l$, each node is equipped with an $l$-dimensional probability distribution, and we seek the distribution across all nodes that minimizes the objective function under the constraint that the sum of node probabilities is one. Furthermore, based on the recognition that traditional stochastic block models do not adequately mimic many real datasets, particularly in representing the distribution of higher-order simplices within each cluster, we propose the stochastic block tensor model (SBTM). For each $k\geq 2$, the SBTM uses probability parameters to control the number of $k$-cliques within or between clusters, adjusting the distribution of higher-order simplices in the network and thus better reflecting real data characteristics. The evaluation of our proposed function was conducted using graphs generated by the stochastic block tensor model (SBTM) and in integration with graph neural network-based architectures (GAT). In challenging classification scenarios, where the probability of connections within the same label is low, the probability of connections between different labels is high, and there are fewer nodes with known labels, our proposed function integrating higher-order networks outperformed results from simple pairwise interactions or random walk-based probabilistic methods. Especially on imbalanced data, by adjusting the weight parameter within the objective function, further accuracy gains were achieved when the distribution of misclassified nodes was biased towards certain label indices containing more nodes than other labels. This offers potential applications to many node classification problems in network data, when combined with several semi-supervised studies that do not directly use higher-order simplices dispersed in networks. Our suggested objective function, which conducts different calculations depending on the size of the simplices, faces a computational challenge because uniform operations cannot be applied to all simplices, making GPU-based parallel computing approaches difficult to deploy. Overcoming this limitation is proposed for future research.
In many real-world systems, network interactions are not only pairwise, but involve the joint non-linear couplings of more than two nodes [26]. Here, we fix some terminology on higher-order networks that will be used throughout the paper.
Networks represented by graphs consist of nodes representing entities of the system, and edges depicting their interactions. Such graphical representations facilitate insights into the system's modular structure or its inherent communities [1, 2]. While traditional graph analysis methods only considered pairwise interactions between nodes, recent research, including studies in the social sciences [3] and biochemical systems [4], has experimentally demonstrated that networks in real systems often rely on interactions involving more than two nodes or agents. As a result, to analyze the attributes of a network, it is essential to illuminate the causal interactions of the network using higher-order networks (or hypergraphs) beyond pairwise relationships [5]. There are various approaches to address this point of view, and recent studies are elucidating the relationships between cliques (a subset of nodes such that every two distinct nodes in the clique are adjacent) that form higher-order networks using probabilistic modeling based on the Stochastic Block Model (SBM) [6, 7, 8]. The SBM is a generative model for random graphs that includes the following parameters: the number of nodes, the number of disjoint communities to which each node belongs, and the probability of edge connections between each pair of communities. The most common form of SBM assumes that the number of nodes in each community and the probability of edge connections within the same community are equal; nevertheless, several modified variants of the SBM have also been studied [9, 10].
We also propose a novel graph generation model, the Stochastic Block Tensor Model (SBTM). In general, traditional SBM-generated networks differ significantly from many real-world networks. Specifically, when comparing networks of equivalent density (that is, networks with an identical count of nodes and edges), SBM-based models typically exhibit far fewer higher-order polyhedra (simplices or cliques) than what is observed in real-world graphs. This limitation of the SBM stems from its nature as an edge-generation model. For example, in social networks, while two people might form a friendship, it is also possible for three or more individuals to simultaneously establish a friendship. In light of this, we suggest that edge-generation models (SBMs) have limits in producing network data that is similar to what is observed in the real world, and we offer a revised model capable of incorporating higher-order structures such as triangles and tetrahedrons into the network.
B
Finally, we note that our work also fits into the literature that leverages the power of ML for causal analysis and policy evaluation. The value added of doubly-robust procedures has been explored in applied works by, for example, Knaus (2022), Bach
Our method for panel data models with individual fixed effects is general and particularly relevant for applied researchers. We provide new estimation tools within the existing DML framework for use on panel data. In doing so, we broaden the reach of DML to a large family of empirical problems for which the time dimension must be properly accounted for. Our focus, in the subsequent development, on the homogeneous treatment effect case is also because such models are widely used by applied researchers, but we also show how our procedures can be extended to heterogeneous treatment effects provided that the analyst is prepared to specify a (finite-dimensional) parametric model for this heterogeneity. More widely, we encourage researchers to use the procedures we propose in place of existing ones, or to test the robustness of their results (based on, say, linear models) to non-linearity.
Naghi (2024a, b). Because panel data are widely used in applied analyses, our proposed procedures for panel data models have the potential to attract the interest of applied researchers from various fields, broadening the applicability of DML.
The second approach we consider follows more conventional techniques for panel data by transforming the data to remove entirely the fixed effects from the analysis.
In this paper, we develop and assess novel DML procedures for estimating treatment (or causal) effects from panel data with fixed effects. The procedures we propose are extensions of the correlated random effects (CRE), within-group (WG) and first-difference (FD) estimators commonly used for linear models to scenarios where the underlying model is non-linear. Specifically, these are based on an extension of the partially linear regression (PLR) model proposed by Robinson (1988) to panel data through the inclusion of time-varying predictors and unobserved individual heterogeneity (i.e. individual fixed effects).
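As a rough Python sketch of the cross-sectional building block that these panel procedures extend, the following implements the cross-fitted partialling-out estimator for Robinson's partially linear model; the random-forest learners and simulated data are illustrative assumptions, and the CRE/WG/FD transformations themselves are not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_plr(Y, D, X, n_splits=5, seed=0):
    """Cross-fitted partialling-out for Y = D*theta + g(X) + e:
    residualise Y and D on X with out-of-fold ML predictions,
    then regress the Y-residuals on the D-residuals."""
    res_Y, res_D = np.empty_like(Y), np.empty_like(D)
    for tr, te in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        m_Y = RandomForestRegressor(n_estimators=200, random_state=seed).fit(X[tr], Y[tr])
        m_D = RandomForestRegressor(n_estimators=200, random_state=seed).fit(X[tr], D[tr])
        res_Y[te] = Y[te] - m_Y.predict(X[te])
        res_D[te] = D[te] - m_D.predict(X[te])
    return np.sum(res_D * res_Y) / np.sum(res_D ** 2)

# toy data with a true treatment effect of 1.5
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))
D = np.sin(X[:, 0]) + rng.normal(size=1000)
Y = 1.5 * D + np.cos(X[:, 1]) + rng.normal(size=1000)
print(dml_plr(Y, D, X))
```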
B
Global Energy Network $E_{\bm{\theta}}^{global}(\bm{c},\bm{y})$.
The class and concept energy networks model class labels and concepts separately; in contrast, the global energy network models the global relation between class labels and concepts.
The class energy network learns the dependency between the input and the class label, while the concept energy network learns the dependency between the input and each concept separately. In contrast, our global energy network learns (1) the interaction between different concepts and (2) the interaction between all concepts and the class label.
Our ECBM consists of three energy networks collectively parameterized by $\bm{\theta}$: (1) a class energy network $E_{\bm{\theta}}^{class}(\bm{x},\bm{y})$ that measures the compatibility of input $\bm{x}$ and class label $\bm{y}$, (2) a concept energy network $E_{\bm{\theta}}^{concept}(\bm{x},\bm{c})$ that measures the compatibility of input $\bm{x}$ and the $K$ concepts $\bm{c}$, and (3) a global energy network $E_{\bm{\theta}}^{global}(\bm{c},\bm{y})$ that measures the compatibility of the $K$ concepts $\bm{c}$ and class label $\bm{y}$.
To predict $\bm{c}$ and $\bm{y}$ given the input $\bm{x}$, we freeze the feature extractor $F$ and the energy network parameters $\bm{\theta}$ and search for the optimal prediction of the concepts $\widehat{\bm{c}}$ and the class label $\widehat{\bm{y}}$ as follows:
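A schematic PyTorch sketch of this prediction step: with all energy parameters frozen, free logits for the concepts and the class label are relaxed by gradient descent so as to minimise the sum of the three energies. The small MLP energy networks, dimensions and optimisation settings are placeholders, not the ECBM architecture.

```python
import torch
import torch.nn as nn

d_x, K, n_classes = 16, 4, 3   # feature, concept and class dimensions (placeholders)
E_class   = nn.Sequential(nn.Linear(d_x + n_classes, 32), nn.ReLU(), nn.Linear(32, 1))
E_concept = nn.Sequential(nn.Linear(d_x + K, 32), nn.ReLU(), nn.Linear(32, 1))
E_global  = nn.Sequential(nn.Linear(K + n_classes, 32), nn.ReLU(), nn.Linear(32, 1))
for net in (E_class, E_concept, E_global):          # freeze the energy parameters
    for p in net.parameters():
        p.requires_grad_(False)

def predict(x, steps=200, lr=0.1):
    """Minimise the total energy over concept and class predictions for a given input."""
    c_logit = torch.zeros(1, K, requires_grad=True)
    y_logit = torch.zeros(1, n_classes, requires_grad=True)
    opt = torch.optim.Adam([c_logit, y_logit], lr=lr)
    for _ in range(steps):
        c = torch.sigmoid(c_logit)                   # concept probabilities
        y = torch.softmax(y_logit, dim=-1)           # class probabilities
        energy = (E_class(torch.cat([x, y], dim=-1))
                  + E_concept(torch.cat([x, c], dim=-1))
                  + E_global(torch.cat([c, y], dim=-1))).sum()
        opt.zero_grad()
        energy.backward()
        opt.step()
    return torch.sigmoid(c_logit).detach(), torch.softmax(y_logit, dim=-1).detach()

x = torch.randn(1, d_x)        # in the real model, x comes from the frozen feature extractor F
c_hat, y_hat = predict(x)
print(c_hat, y_hat)
```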
B
$\mathcal{H}(\nu) = \mathbb{E}_{p_{\nu}}\left[|\mathbf{C}_{\nu}(\cdot)|\right] = \int_{\mathcal{X}} |\mathbf{C}_{\nu}(x_{*})|\, p_{\nu}(x_{*}\mid\mathbf{y})\, dx_{*}.$
This functional is a weighted integrated Mean Squared Prediction Error (weighted IMSPE). The weight is the posterior density, which focuses the attention on the region of interest for the inverse problem.
We are interested in the variance integrated over the posterior distribution. The quantity of interest $D_n$ is given by:
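A small Python sketch of estimating this posterior-weighted quantity by Monte Carlo: the Gaussian process predictive variance is averaged over samples from the posterior of the inverse problem; the toy forward model, kernel and sample-based stand-in posterior are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def weighted_imspe(gp, posterior_samples):
    """Monte Carlo estimate of the posterior-weighted IMSPE: the GP predictive
    variance averaged over posterior samples, i.e. an approximation of
    the integral of the variance against the posterior density."""
    _, std = gp.predict(posterior_samples, return_std=True)
    return np.mean(std ** 2)

# toy setup: GP surrogate of a 1-d forward model, posterior approximated by samples
rng = np.random.default_rng(3)
X_train = rng.uniform(0.0, 1.0, size=(8, 1))
y_train = np.sin(6 * X_train[:, 0])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6).fit(X_train, y_train)
posterior_samples = rng.normal(0.5, 0.05, size=(2000, 1)).clip(0.0, 1.0)   # stand-in posterior
print(weighted_imspe(gp, posterior_samples))
```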
Based on these results, one could argue that the CSQ strategy can be situationally better, as it is easier to set up while providing similar performance in the end. However, two counter-arguments can be pointed out. First of all, the IP-SUR strategy does exhibit a guarantee for the convergence of the integrated variance, which offers a strong theoretical foundation. Besides, though the acquisition function in the IP-SUR strategy is more computationally intensive, the method does not rely on the prior setting of an arbitrary hyperparameter, while the CSQ design is based on the hyperparameter $h\in\mathbb{R}_{+}$ introduced in the definition of the bounding set $\mathcal{B}_{n}(h)$ in Equation (7). This hyperparameter quantifies how far away from the MAP the optimization problem can search. The choice of this hyperparameter can quite drastically impact the performance of the sequential design. A lower value leads to a more constrained set and thus an easier optimization problem; however, the new design points are then confined to a smaller region of the parameter space.
This work presents two new sequential design strategies to build efficient Gaussian process surrogate models in Bayesian inverse problems. These strategies are especially important for cases where the posterior distribution in the inverse problem has thin support or is high-dimensional, in which case space-filling designs are not as competitive. The IP-SUR strategy introduced in this work is shown to be tractable and is supported by a theoretical guarantee of almost sure convergence of the weighted integrated mean square prediction error to zero. This method is compared to a simpler CSQ strategy, which is adapted from D-optimal designs, and to a strategy based on the minimization of the Bayes risk with respect to the variance of the likelihood estimate. While both methods perform better than D-optimal and I-optimal strategies, the IP-SUR method seems to provide better performance than CSQ in higher dimensions while not relying on the choice of a hyperparameter, all the while being grounded in strong theoretical foundations. It is also comparable to the Bayes risk minimization for all test cases and even superior for the bimodal test case. The latter strategy also does not come with a convergence guarantee.
A
Control of the familywise error rate at a fixed level no longer works for ancestor regression. However, there is still a separation between ancestors and non-ancestors in terms of the effect size. For long time series, there is a sweet spot with high power at a fairly low error rate. Hence, the ordering of the p-values still gives some indication of what could be the true ancestors. Some of the curves do not start at $0$ power and error rate as there are p-values that are numerically equal to $0$ such that no reasonable ordering can be made for these observations.
For non-ancestors, the observed average of the absolute z-statistics is close to the theoretical mean under the asymptotic null distribution, as desired. On the right-hand side, we see that we can control the type I error at the desired level for every sample size. As expected, the power to detect ancestors increases with larger sample sizes. However, driven by the last group of ancestors discussed above, there are still some undetected ancestors for $T=10^6$. For the other groups, we obtain almost perfect power. One could then also infer these missed effects by recursive arguments. We discuss this for the case of networks below.
Additionally, we show on the left side the performance of the LiNGAM algorithm as described in Hyvärinen et al. (2010). For this, we use the code published together with Moneta et al. (2013). As the LiNGAM algorithm by default does not search for sparse estimates $\hat{\mathbf{B}}_{\tau}$ for $\tau>0$, a straightforward comparison is only possible for instantaneous effects. We note that the LiNGAM algorithm leads to higher power, especially for low sample sizes. However, it does not allow for an interpretable error control at a predefined level. Similar results were obtained for the comparison in the i.i.d. case, see Schultheiss and Bühlmann (2023).
Control of the familywise error rate at a fixed level no longer works for ancestor regression. However, there is still a separation between ancestors and non-ancestors in terms of the effect size. For long time series, there is a sweet spot with high power at a fairly low error rate. Hence, the ordering of the p-values still gives some indication of what could be the true ancestors. Some of the curves do not start at $0$ power and error rate as there are p-values that are numerically equal to $0$ such that no reasonable ordering can be made for these observations.
At our target level $\alpha=0.05$, the power remains comparable to the case without hidden variables or is even increased for some sample sizes. The unobserved variables are mainly a problem for error control, as the assumptions are not fulfilled, but not for detection per se. Similarly, the power of the LiNGAM algorithm remains high while the error rate is increased.
D
Figures 2(a) and 2(b) show the decision boundary of the model for the former and latter cases, respectively.
Figure 6 demonstrates how the scaling factor ($\alpha$) influences test accuracy for models trained on datasets with 25% label corruption. We observe that increasing $\alpha$ generally improves test accuracy up to a certain point, after which accuracy gradually declines.
The noise transition matrix $\mathcal{T}$ is a square matrix of size $K\times K$, where $K$ is the number of classes, which captures the conditional probability distribution of label corruption.
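A small Python sketch of one common choice of $\mathcal{T}$ (an assumption here, not necessarily the matrix used in the paper): symmetric noise that keeps each label with probability 0.75 and flips it uniformly otherwise, matching the 25% corruption level of the experiments.

```python
import numpy as np

def uniform_noise_matrix(K, corruption=0.25):
    """K x K noise transition matrix: entry [i, j] is the probability that a
    clean label i is observed as label j; rows sum to one."""
    T = np.full((K, K), corruption / (K - 1))
    np.fill_diagonal(T, 1.0 - corruption)
    return T

def corrupt_labels(labels, T, rng):
    """Resample each observed label from the row of T indexed by its clean label."""
    K = T.shape[0]
    return np.array([rng.choice(K, p=T[y]) for y in labels])

rng = np.random.default_rng(4)
T = uniform_noise_matrix(K=10, corruption=0.25)
clean = rng.integers(0, 10, size=1000)
noisy = corrupt_labels(clean, T, rng)
print((noisy != clean).mean())    # roughly 0.25
```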
We leverage the cross-entropy loss, denoted by $\mathcal{L}$, of the trained model with parameters $\theta_{*}$.
We observe that, with label corruption, the test accuracy of the model drops by 4.8% relative to the model trained without any label corruption.
D
Suppose that $T \gg k^{2}\log^{3}T$.
where $C_1 > 0$ is some sufficiently large universal constant. Then
there exists some $x^{\prime}$ in $\mathcal{N}_{\varepsilon}$ such that
Then there exists some universal constant $C_5 > 0$ such that, for
constant $c_R > 0$ such that
C
In order to circumvent parametric modeling assumptions, we propose a unification of the generative model and the inference process. Building upon the point estimators, we define a distinct variational family over global and local parameters for a fully Bayesian treatment of all variables.
We show how partial states and re-sampled indices generated by Smc can be interpreted as auxiliary random variables within a pseudo-marginal framework, thus establishing connections between variational pseudo-marginal methods and Vsmc (Naesseth et al., 2018; Moretti et al., 2021).
Pseudo-marginal methods are a class of statistical techniques used to approximate difficult-to-compute probabilities, typically by introducing auxiliary random variables to form an unbiased estimate of the target probability (Andrieu & Roberts, 2009).  Beaumont (2003) introduced a method in genetics to sample genealogies in a fully Bayesian framework. Tran et al. (2016) utilizes pseudo-marginal methods to perform variational Bayesian inference with an intractable likelihood. Our work is a synthesis of Wang et al. (2015) and Moretti et al. (2021) in that we introduce a variational approximation on topologies using Smc and a VI framework to learn parameters.
A recent body of research has melded variational inference (VI) and sequential search. These connections are realized through the development of a variational family for hidden Markov models, employing Sequential Monte Carlo (Smc) as the marginal likelihood estimator (Maddison et al., 2017; Naesseth et al., 2018; Le et al., 2018; Moretti et al., 2019; 2020; 2021). Within the field of Bayesian phylogenetics (the study of evolutionary histories), various methods have been proposed for inference on tree structures. Common approaches include local search algorithms like random-walk Mcmc (Ronquist et al., 2012) and sequential search algorithms like Combinatorial Sequential Monte Carlo (Csmc) (Bouchard-Côté et al., 2012; Wang et al., 2015). Mcmc methods also handle model learning.  Dinh et al. (2017) proposes ppHmc which extends Hamiltonian Monte Carlo to phylogenies. Evaluating the likelihood term in Mcmc acceptance ratios can be challenging. As a workaround, particle Mcmc (Pmcmc) algorithms use Smc to estimate the marginal likelihood and define Mcmc proposals for parameter learning (Wang & Wang, 2020).
Section 3.1 adapts the Csmc approach to perform inference on jet tree structures. Section 3.2 reformulates Vcsmc for inference on global parameters. Section 3.2.1 utilizes Vcsmc methodology to learn parameters as point estimates. Section 3.2.2 defines a prior on the model parameters to construct a variational approximation on both global and local parameters. The resulting approach is interpreted as a variational pseudo-marginal method establishing connections between pseudo-marginal methods (Andrieu & Roberts, 2009) and Variational Combinatorial Sequential Monte Carlo (Naesseth et al., 2018; Moretti et al., 2021).
A
However, in less extreme cases, determining the need for zero-inflation versus an appropriate choice of block structure becomes essential.
Furthermore, Dong et al. [15] and Motalebi et al. [32] specifically focused on adapting stochastic block models to account for excess zeroes, underscoring the importance of accurately modelling sparsity for realistic network analysis.
To address this, appropriate likelihood-ratio tests and model comparison techniques have been developed for various models [8, 32, 15, 6].
In red, the expected edge count distribution according to a DCSBM whose blocks have been obtained by modularity maximisation.
Traditional network models, such as the $G(N,p)$ [17, 22], configuration models [9, 20, 7], and stochastic block models [26, 36], have been instrumental in advancing our understanding of complex networks.
B
Hollander (2024)]. The fact that we can also identify the second-order asymptotics of order $n$ allows us to prove a large deviation principle for the number of edges in the graph (in Theorem 2.3), as well as prove that most triangles are actually vertex disjoint (in Theorem 2.2), which would not have been possible with the first-order result only. This is a reflection of the fact that the key large deviation rate is $n$, not $n\log n$, as one might have conjectured after [Chakraborty, van der Hofstad and den
In practice, we can only observe a large network without knowing its full architecture. From the modeling perspective it is important to be able to estimate unknown parameter(s) from observations. In this section, we show that it is possible to consistently estimate the parameters in the exponential random graph in (3.1), with $\beta=\tfrac{1}{3}+\frac{\theta}{\log n}$. Note that the distribution of the graph is characterized by two parameters: $\theta$ and $\lambda$. The estimation procedure is a by-product of Theorem 3.1, and is stated in the following theorem:
The type of models we investigated may be extended as well. We focussed on the number of vertices in triangles, but it would be natural to consider the number of edges in triangles instead. Since this number can vary much more (the number is at most $n(n-1)/2$ rather than $n$, as for the number of vertices in triangles), it appears to be a significantly more difficult problem. Finally, of course, we could extend the number of parameters in our model, and investigate the behaviour of the associated exponential random graph. In what generality can the parameters still be consistently estimated?
The above results in turn allowed us to suggest a range of sparse exponential random graph models, which is important because it is hard to identify sparse exponential random graph models with many triangles. Finally, our results allowed us to prove that the parameters of the model can be consistently estimated, a property that is relevant in practice.
This paper is organised as follows. In Section 2, we identify the second-order asymptotics of the large-deviation probabilities of the rare event that a sparse Erdős–Rényi random graph has a linear number of vertices in triangles, study the structure of the graph conditionally on this rare event, and provide proofs for our main results. In Section 3, we use these results, as well as the key insights developed in their proofs, to study exponential random graphs based on the number of vertices in triangles. We show that, for appropriate parameter choices, such models are sparse, i.e., lead to sparse exponential random graphs. In Section 4, we show how our main results can be used to consistently estimate the exponential random graph parameters. We close in Section 5 with a discussion and a list of open problems.
C
By contrast, normalizing flow (NF) models [14, 15] work by applying a series of bijective transformations to a simple base distribution (usually uniform or Gaussian) to deterministically convert samples to a desired target distribution. While NFs have been successfully used for posterior approximation [16, 17, 18, 19, 20] and produce higher-quality samples, the requirement that the Jacobian of each transformation be simple to compute often requires a high number of transformations and, traditionally, these transformations do not alter the dimensionality of their inputs, resulting in latent spaces with thousands of dimensions. More recent lines of work on injective flow models [21, 22, 23, 24, 25] address this limitation by allowing practitioners to use flows to learn lower-dimensional manifolds from data, but most compression-capable flow models still fail to reach high generative performance on key benchmark image datasets (cf. [23]).
More recently, diffusion-based models (DBMs) [26, 27, 28, 29, 30, 31, 32, 33] have been shown to achieve state-of-the-art results in several generative tasks, including image, sound, and text-to-image generation. These models work by stipulating a fixed forward noising process (e.g., a forward stochastic differential equation (SDE)), wherein Gaussian noise is incrementally added to samples of the target data distribution until all information in the original data is degraded. To generate samples from the target distribution, one then needs to simulate the reverse de-noising process (reverse SDE [34]) which requires knowledge of the score of the intermediate “noised” transitional densities. Estimation of this score function across multiple noise levels is the key component of DBM model training, typically using a de-noising score matching objective [35, 28, 30]. Yet, despite their excellent performance as generative models, DBMs, unlike VAEs or flows, do not readily lend themselves to inference. In particular, because DBMs use a diffusion process to transform the data distribution, they fail to preserve local structure in the data (Figure 1), and uncertainty under this mapping is high at its endpoint because of continuous noise injection and resultant mixing. Moreover, because the final distribution—Gaussian white noise of the same dimension—must have higher entropy than the original data, there is no data compression.
Limitations: One limitation of our model is its reliance on the participation ratio (7) as a measure of dimensionality. Because PR relies only on second-order statistics and our proposals (9) are formulated in the data eigenbasis, our method tends to favor the top principal components of the data when reducing dimension. However, as noted above, this is not simply a truncation to the lowest principal components, since dimensions still mix via coupling to the score function in (6). Nonetheless, solutions to the condition (8) that preserve (or reduce) more complex dimensionality measures might lead to even stronger compressions for curved manifolds (Appendix C.2.2), and more sophisticated choices for noise and rescaling schedules in (6) might lead to compressions that do not simply remove information along fixed axes, more similar to [66]. That is, we believe much more interesting classes of flows are possible. A second limitation is that mentioned in Section 3.2 and in our experiments: our schedule requires training DBMs over much larger ranges of noise than are typically used, and this results in noticeable tradeoffs in compression performance as the inflation gap and number of preserved dimensions are varied.
Specifically, our contributions are: First, focusing on the case of unconditional generative models, we show how a previously established link between the SDE defining diffusion models and the probability flow ODE (pfODE) that gives rise to the same Fokker-Planck equation [30] can be used to define a unique, deterministic map between the original data and an asymptotically Gaussian distribution. This map is bidirectional, preserves local neighborhoods, and has controllable numerical error, making it suitable for rigorous uncertainty quantification. Second, we define two classes of flows that correspond to novel noise injection schedules in the forward SDE of the diffusion model. The first of these preserves a measure of dimensionality, the participation ratio (PR) [48], based on second-order data statistics, preventing an effective increase in data dimensionality with added noise, while the second flow reduces PR, providing data compression. We demonstrate experimentally that inflationary flows indeed preserve local neighborhood structure, allowing for sampling-based uncertainty estimation, and that these models continue to provide high-quality generation under compression, even from latent spaces reduced to as little as 0.03% of the nominal data dimensionality. As a result, inflationary flows offer excellent generative performance while affording data compression and accurate uncertainty estimation for scientific applications.
Figure 1: SDE-ODE Duality of diffusion-based models. The forward (noising) SDE defining the DBM (left) gives rise to a sequence of marginal probability densities whose temporal evolution is described by a Fokker-Planck equation (FPE, middle). But this correspondence is not unique: the probability flow ODE (pfODE, right) gives rise to the same FPE. That is, while both the SDE and the pfODE possess the same marginals, the former is noisy and mixing while the latter is deterministic and neighborhood-preserving. Both models require knowledge of the score function $\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})$, which can be learned by training either model.
A
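The excerpt above contrasts the noisy forward SDE with the deterministic probability flow ODE (pfODE) that shares its marginals. A minimal sketch of that duality, assuming a variance-preserving linear noise schedule and a hypothetical `score_fn` standing in for a trained score network (neither choice comes from the excerpt itself):

```python
import numpy as np

def score_fn(x, t):
    # Hypothetical stand-in: the score of a standard Gaussian, grad_x log N(x; 0, I) = -x.
    # In practice this would be a trained network approximating grad_x log p_t(x).
    return -x

def beta(t):
    # Assumed variance-preserving linear schedule (beta_min=0.1, beta_max=20).
    return 0.1 + 19.9 * t

def pfode_encode(x0, n_steps=1000):
    """Deterministically map data toward the latent endpoint by Euler-integrating
    the pfODE  dx/dt = -0.5 * beta(t) * (x + score(x, t))  from t=0 to t=1."""
    x = np.array(x0, dtype=float)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * (-0.5 * beta(t) * (x + score_fn(x, t)))
    return x

def pfode_decode(x1, n_steps=1000):
    """Integrate the same ODE in reverse time, giving an approximate inverse of
    the Euler-discretized encoding map (the continuous-time map is bijective)."""
    x = np.array(x1, dtype=float)
    dt = 1.0 / n_steps
    for i in range(n_steps, 0, -1):
        t = i * dt
        x = x - dt * (-0.5 * beta(t) * (x + score_fn(x, t)))
    return x

z = pfode_encode(np.array([1.0, -0.5]))
x_rec = pfode_decode(z)
```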
Simultaneous estimation of multiple quantiles is asymptotically more efficient than separate estimation of individual regression quantiles or ignoring within-subject dependency (Cho, Kim, and Kim 2017). However, this approach does not guarantee non-crossing quantiles, which can affect the validity of the predictions and introduce critical issues in certain scenarios. To address this limitation, research on non-crossing multiple quantile regression has gained attention in recent years, with several methods proposed to ensure non-crossing quantile estimates, including stepwise approaches (Liu and Wu 2009), non-parametric techniques (Cannon 2018), and deep learning-based models (Moon et al. 2021; Brando et al. 2022).
Most existing conformal methods for regression either directly predict the lower and upper endpoints of the interval using quantile regression models (Romano, Patterson, and Candès 2019; Kivaranovic, Johnson, and Leeb 2020; Sesia and Candès 2020; Gupta, Kuchibhotla, and Ramdas 2022) or first estimate the full conditional distribution of the response and then invert it to obtain prediction sets (Izbicki, Shimizu, and Stern 2020a; Chernozhukov, Wüthrich, and Zhu 2021). While these approaches perform well in many situations, they may produce sub-optimal prediction sets if the conditional distribution is skewed. Conformal quantile regression typically yields equal-tailed intervals, but the shortest valid interval may be unbalanced. On the other hand, density-based methods can adapt to skewness but typically involve many tuning parameters and more difficult interpretation, which can be complex for practitioners.
Our proposed method for constructing non-convex prediction sets is related to the work of (Izbicki, Shimizu, and Stern 2022), who introduce a profile distance to measure the similarity between features and construct prediction sets based on neighboring samples.
Two advanced methods, Conformal Quantile Regression (CQR) (Romano, Patterson, and Candès 2019) and Conformal Histogram Regression (CHR) (Sesia and Romano 2021), extend this framework:
However, the validity of the produced intervals is only guaranteed for specific models under certain regularity and asymptotic conditions (Steinwart and Christmann 2011; Takeuchi et al. 2006; Meinshausen 2006). Many related methods for constructing valid prediction intervals can be encompassed within the nested conformal prediction framework, where a nested sequence of prediction sets is generated by thresholding nonconformity scores derived from various approaches, such as residual-based methods (Papadopoulos et al. 2002; Balasubramanian, Ho, and Vovk 2014; Lei et al. 2018), quantile regression (Romano, Patterson, and Candès 2019; Kivaranovic, Johnson, and Leeb 2020; Sesia and Candès 2020; Chernozhukov, Wüthrich, and Zhu 2021), density estimation (Izbicki, Shimizu, and Stern 2020b; Sesia and Romano 2021; Izbicki, Shimizu, and Stern 2022), and their combinations with ensemble methods (Gupta, Kuchibhotla, and Ramdas 2022) and localized methods (Papadopoulos, Gammerman, and Vovk 2008; Colombo 2023; Luo and Colombo 2024). However, as noted by (Lei, Robins, and Wasserman 2013), the optimal conditionally-valid prediction regions are level sets of conditional densities, which need not be intervals, suggesting that constructing possibly non-convex prediction sets might lead to more efficient conformal predictors.
D
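Conformal Quantile Regression, cited in the excerpts above, turns fitted conditional quantiles into a finite-sample-valid interval by inflating the band with a calibration quantile of conformity scores. A minimal sketch under those standard definitions, with placeholder arrays for the pre-fitted quantile predictions:

```python
import numpy as np

def cqr_interval(q_lo_cal, q_hi_cal, y_cal, q_lo_test, q_hi_test, alpha=0.1):
    """Split-conformal CQR: inflate the [q_lo, q_hi] band by the finite-sample
    (1 - alpha) quantile of the scores max(q_lo - y, y - q_hi) on calibration data."""
    scores = np.maximum(q_lo_cal - y_cal, y_cal - q_hi_cal)
    n = len(y_cal)
    level = min(np.ceil((1 - alpha) * (n + 1)) / n, 1.0)  # finite-sample correction
    q_hat = np.quantile(scores, level)
    return q_lo_test - q_hat, q_hi_test + q_hat
```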
For these five datasets, the left plots present the desired miscoverage rate $\alpha$ versus the true coverage rate. The closer the curve aligns with the line $1-\alpha$, the easier it is for the method to achieve the desired coverage. Our proposed method, CIA, is very close to the line $1-\alpha$. There is a small gap in the result of the Anaheim dataset. This result aligns with Theorem 2. Compared to the Chicago dataset, Anaheim is a smaller traffic network. The probability of two shortest paths overlapping each other can affect the validity. This issue is much less obvious in a larger traffic network like Chicago. The quantity $\delta$ from the theorem, along with the coverage gap, is provided in the appendix. Recall that the group-sample conformal method is proposed in Section 3. This is also a conformal prediction method, so its validity is also close to the desired line $1-\alpha$ in the results of Figure 2. However, the validity is much less reliable in the results of Figure 3. This is because edge weights from random samples are not exchangeable with the edge weights on the shortest path. The method of Bonferroni correction has good validity in the example of the community dataset. The reason is that under the current setting of calibration and test set ratio, each subset contains one or two samples, which is very close to the case of classical conformal prediction. In other datasets, Bonferroni correction has too high coverage. For normal confidence intervals, since the assumption is not true in general, the coverage is much lower than the desired one. In conclusion, our proposed method has the most reliable validity in all datasets that we have studied.
This gap is particularly relevant in applications such as transductive conformal prediction on traffic networks. For example, existing Graph Neural Network (GNN) methods can predict the label of each road, where the label can be considered as the cost of traversing that road. This problem has been studied in (Huang et al. 2024; Zargarbashi, Antonelli, and Bojchevski 2023; Zhao, Kang, and Cheng 2024). Conformalized GNN can output a prediction set of each edge’s label, but the cost of a route, which is the sum of the labels of the edges on the route, cannot be directly obtained by applying conformal prediction to individual edges. This challenge arises from two main aspects. First, the sum or average of random variables involves the convolution of the density function, and simple interval arithmetic, such as adding up the lower and upper bounds, cannot provide a confidence interval with the desired coverage. Second, the coverage of each confidence interval is in a marginal sense, so the coverage of two labels from two confidence intervals are dependent events. Consequently, it is difficult to use the conformal prediction set for a single label to devise a prediction set for multiple labels. To address this critical gap in the current literature on conformal prediction and expand its applicability to a broader range of uncertainty quantification problems, we introduce the method Conformal Interval Arithmetic (CIA), specifically designed to estimate the average or other symmetric functions of unknown labels over a certain index set, demonstrating the usefulness of our problem setting in many applications where people are interested in obtaining estimates about multiple labels.
Our main proposed method can be described as follows. The core idea is to establish a confidence interval using the exchangeability of the groups of indices. This method involves finding the absolute value of the difference between the sum of labels in a group and its prediction, or absolute residual, and using this as the score function for split conformal prediction. Assuming the groups of indices are exchangeable, we can provide prediction sets with valid marginal coverage. However, this approach cannot handle the case when the calibration samples and the test samples belong to different groups. To address this issue, we introduce a new split conformal prediction method called symmetric calibration, which creates a scenario in which the calibration set and the test set play symmetric roles. At the group level, each index within the same group has equal chance of being assigned to either a calibration sample or a test sample. This ensures that the residual from the sums of calibration samples and the sums of test samples have identical distributions. By leveraging the exchangeability of the groups of samples in the calibration set, we can devise a conformal prediction set for the sum of residuals of calibration samples. Since both the calibration samples and the test samples share the same distribution, this conformal prediction set is also valid for the sum of the test samples.
Under the choices of $\alpha$ on the left plots, the right plots compare the coverage versus the prediction set size. The lower the curve, the more efficient and informative the prediction set provided by the method. In the plots of Figure 2, CIA is the most efficient in all datasets. In the plots of Figure 3, CIA has similar efficiency to the Group and Normal methods. In conclusion, CIA is the most efficient method with valid coverage.
For these five datasets, the left plots present the desired miscoverage rate $\alpha$ versus the true coverage rate. The closer the curve aligns with the line $1-\alpha$, the easier it is for the method to achieve the desired coverage. Our proposed method, CIA, is very close to the line $1-\alpha$. There is a small gap in the result of the Anaheim dataset. This result aligns with Theorem 2. Compared to the Chicago dataset, Anaheim is a smaller traffic network. The probability of two shortest paths overlapping each other can affect the validity. This issue is much less obvious in a larger traffic network like Chicago. The quantity $\delta$ from the theorem, along with the coverage gap, is provided in the appendix. Recall that the group-sample conformal method is proposed in Section 3. This is also a conformal prediction method, so its validity is also close to the desired line $1-\alpha$ in the results of Figure 2. However, the validity is much less reliable in the results of Figure 3. This is because edge weights from random samples are not exchangeable with the edge weights on the shortest path. The method of Bonferroni correction has good validity in the example of the community dataset. The reason is that under the current setting of calibration and test set ratio, each subset contains one or two samples, which is very close to the case of classical conformal prediction. In other datasets, Bonferroni correction has too high coverage. For normal confidence intervals, since the assumption is not true in general, the coverage is much lower than the desired one. In conclusion, our proposed method has the most reliable validity in all datasets that we have studied.
C
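The CIA description above scores each calibration group by the absolute residual of its label sum and applies split conformal prediction at the group level. A simplified sketch of that step (it assumes exchangeable groups and omits the symmetric-calibration refinement described in the excerpt):

```python
import numpy as np

def group_sum_conformal(cal_groups, test_group_preds, alpha=0.1):
    """cal_groups: list of (y_true, y_pred) array pairs, one per calibration group.
    test_group_preds: predicted labels for the test group (true labels unknown).
    Returns an interval for the sum of the test group's labels."""
    scores = np.array([abs(y.sum() - yhat.sum()) for y, yhat in cal_groups])
    n = len(scores)
    level = min(np.ceil((1 - alpha) * (n + 1)) / n, 1.0)
    q = np.quantile(scores, level)
    s_hat = test_group_preds.sum()
    return s_hat - q, s_hat + q

# Toy usage with synthetic, exchangeable groups.
rng = np.random.default_rng(0)
cal = [(rng.normal(1, 1, 5), rng.normal(1, 1, 5)) for _ in range(50)]
lo, hi = group_sum_conformal(cal, rng.normal(1, 1, 5))
```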
$I(\mathbf{x}[i],\mathbf{y}[j])=\iint p(\mathbf{x}[i],\mathbf{y}[j])\log\frac{p(\mathbf{x}[i],\mathbf{y}[j])}{p(\mathbf{x}[i])\,p(\mathbf{y}[j])},\quad i=1{:}4,\ j=1{:}5$
We then choose independent GP-based priors for each output $\mathbf{y}[j]$, which use as features only those inputs that exhibit some influence on the output (‘influential subset’ in Table 2). The mean is chosen as a linear function of the form $m(\mathbf{x})=\theta\,\text{vf}=\theta\,\mathbf{x}[0]$, where $\theta$ is learnt via least-squares fit to the training data. This function is interesting as it is highly informative for some outputs (e.g. $\nu_{eff}$) and less so for others (e.g. $c_{eff}$), as illustrated in Fig. 11 (a), so it allows a thorough study of the benefits of using an informative prior. With respect to the prior uncertainty, each GP prior uses a multivariate RBF kernel $0.2\exp\left(-\frac{\lVert\tilde{\mathbf{x}}-\tilde{\mathbf{x}}'\rVert_{2}^{2}}{2(0.8)^{2}}\right)$
Table 2: Materials surrogate modeling: sensitivity analysis. Mutual information $I(\mathbf{x}[i],\mathbf{y}[j])$ (in nats) is computed using the scikit-learn package [38].
To design the functional prior density $p(g)$ in this non-trivial multi-input multi-output example, we first ran a simple sensitivity analysis on the training data to determine if some inputs had negligible influence on some of the outputs. Table 2 shows the mutual information between each input $\mathbf{x}[i]$ – output $\mathbf{y}[j]$ pair, defined as:
The mutual information equals 0 only if input $\mathbf{x}[i]$ and output $\mathbf{y}[j]$ are independent, which is the case for some pairs of input-output in this example.
D
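Table 2 above reports input-output mutual information computed with scikit-learn. A small sketch of that kind of sensitivity screen, with synthetic inputs and outputs standing in for the materials data:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))                 # four inputs x[0..3]
Y = np.column_stack([                           # five outputs y[0..4]
    2.0 * X[:, 0] + 0.1 * rng.normal(size=500),
    np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=500),
    X[:, 0] * X[:, 2],
    rng.normal(size=500),                       # independent of all inputs
    X[:, 3] ** 2,
])

# I[i, j] estimates the mutual information (in nats) between input i and output j;
# values near 0 suggest input i can be dropped from output j's prior.
I = np.array([mutual_info_regression(X, Y[:, j]) for j in range(Y.shape[1])]).T
print(np.round(I, 3))
```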
In Figure 3(b), for moderate $c_{\mathcal{Z}}$, the choice-only estimator with the weak-preference design outperforms the transductive design (fig. 3(a)), demonstrating that focusing on queries with weak preferences improves estimation. However, as $c_{\mathcal{Z}}$ becomes too large, performance declines because many $\dot{\mu}(x^{\top}\theta^{*})$ in 6 of algorithm 1 approach zero, preventing informative queries from being sampled. This advantage of the weak-preference design assumes perfect knowledge of $\theta^{*}$ and equal resource consumption across queries. In practice, where $\theta^{*}$ is unknown and weak-preference queries require longer response times, the transductive design performs better, as shown in section 5.2.
$\mathbb{P}[\operatorname*{arg\,max}_{z\in\mathcal{Z}}z^{\top}\widehat{\theta}\neq z^{*}]$, for three GSE variations, shown as functions of the arm scaling factor $c_{\mathcal{Z}}$ and barrier $a$. Darker colors indicate better estimation. (a) The choice-only estimator $\widehat{\theta}_{\text{CH}}$ with the transductive design $\lambda_{\text{trans}}$ struggles as $c_{\mathcal{Z}}$ increases (i.e., preferences become stronger), highlighting that choices from queries with strong preferences provide limited information. (b) The weak-preference design $\lambda_{\text{weak}}$ improves (a) by sampling queries with weak preferences but assumes perfect knowledge of $\theta^{*}$ and equal resource consumption across queries. (c) The choice-decision-time estimator $\widehat{\theta}_{\text{CH,DT}}$ with $\lambda_{\text{trans}}$ outperforms both choice-only methods in (a) and (b), showing that decision times complement choices and improve estimation, especially for strong preferences.
To address these challenges, we propose a computationally efficient method for estimating linear human utility functions from both choices and response times, grounded in the difference-based EZ diffusion model [67, 8]. Our method leverages response times to transform binary choices into richer continuous signals, framing utility estimation as a linear regression problem that aggregates data across multiple pairs of options. We compare our estimator to traditional logistic regression methods that rely solely on choices [3, 31]. For queries with strong preferences, our theoretical and empirical analyses show that response times complement choices by providing additional information about preference strength. This significantly improves utility estimation compared to using choices alone. For queries with weak preferences, response times add little value but do not degrade performance. In summary, response times complement choices, particularly for queries with strong preferences.
Figure 3(c) shows that the choice-decision-time estimator consistently outperforms the choice-only estimators under both the transductive and weak-preference designs, particularly for strong preferences. This suggests that for queries with strong preferences, decision times complement choices and improve estimation, confirming our theoretical insights from section 3, while for queries with weak preferences, decision times add little value but do not degrade performance.
In fixed-budget best-arm identification, our choice-decision-time estimator’s ability to extract more information from queries with strong preferences is especially valuable. Bandit learners, such as GSE [3], strategically sample queries, update estimates of $\theta^{*}$, and eliminate lower-utility arms. With the choice-only estimator, learners struggle to extract information from queries with strong preferences. To resolve this, one approach is to selectively sample queries with weak preferences, but this has two drawbacks. First, queries with weak preferences take longer to answer (i.e., require more resources), potentially lowering the ‘bang per buck’ (information per resource) [4]. Second, since $\theta^{*}$ is unknown in advance, learners cannot reliably target queries with weak preferences. In contrast, with our choice-decision-time estimator, learners leverage decision times to gain more information from queries with strong preferences, improving bandit learning performance. We integrate both estimators into bandit learning in section 4 and evaluate their performance in section 5.
C
Likewise, using Buffon’s needle experiment, we demonstrated that the QRNG results pass the t-test (hypothesis: mean = $\pi$, data normally distributed) for sample sizes up to $4.54\times$ larger than those achieved by the parallel PRNG, thus resulting in a $\sim 2\times$ better approximation of $\pi$. If the limit is relaxed to $1/\sqrt{N}$, QRNG permits up to $3.16\times$ larger samples, resulting in an up to $2.9\times$ better approximation. In the process, we have demonstrated the capability of this method to distinguish the effects of points-per-sample $M$ and sample number $N$, which is not commonly found in the literature. We have also shown that the use of a QRNG results in better approximations (lower approximation errors) for sub-optimal parameter choices ($N\gg M$), enabling large reductions ($\sim 8\times$) of the number of samples without loss of accuracy. Finally, we demonstrated that comparisons of the effects of QRNGs and PRNGs on MC simulations should be performed with industry-standard PRNGs (e.g. Mersenne Twister or recommended pPRNGs).
Comparing a self-certifying quantum random number generator (c.f. fig. 1A and Supplementary Materials SM sec. A.1.2) to industry-standard PRNGs (SM sec. A.1.1), we demonstrate that the QRNG leads to better approximations than the PRNGs for both methods. We show that the results obtained with the QRNG pass the sign test and t-tests for larger sample sizes than the PRNG: on the one hand, this permits a reduction in sample size while maintaining the same accuracy as achieved by the PRNG. On the other hand, a better approximation of $\pi$ is achieved when considering the same number of samples.
$50\times 100$ sets of 1000 points, each of which is defined by a single-precision number pair representing the $x$- and $y$-coordinates. Although the direct application of the $N\!N$ measure does not illustrate any statistically significant difference, the measures derived from $N\!N$ lead to the following observations: the means and medians of the quality measure $Q$ and $R_{nr}$ are significantly higher for the QRNG as compared to the PRNG but remain below 1, suggesting that the QRNG offers a better dispersion of values, indicating a tendency towards more randomness. The observation that the Coefficient of Variation ($C_{V}$) is smaller for the QRNG than for the PRNG, despite no statistically significant difference observed in $N\!N$ distances (fig. 5), indicates that while both types of generators produce uniformly distributed numbers, the QRNG does so with greater consistency and less variability, more closely matching theoretical expectations. Statistically significant differences are observed at a $95\%$ confidence level between the sets of single-precision floating-point numbers generated by the QRNG and the PRNG, as determined by the Wilcoxon, Student’s t-test, Anderson-Darling, and Kolmogorov-Smirnov tests, as shown in tbl. 1.
Additionally, based on a uniformity analysis, we assessed differences in the random sampling underpinning the MC simulations: our findings suggest that the QRNG, especially at small sample sizes, offers a better dispersion of samples than the PRNG indicating a tendency of the QRNG towards more uniformly distributed samples, closely matching theoretical expectation.
In this work, we assessed the effect of various entropy sources on the outcomes of stochastic simulations. Herein, we assembled a test suite based on Monte Carlo simulations, and a palette of statistical tests with varying underlying assumptions, to compare a quantum random number generator (QRNG) to pseudo-random number generators (PRNGs) in serial and parallel processes. Stochastic estimation of $\pi$ was chosen due to its simplicity and the availability of an exact solution. Using the simple $\pi/4$ estimate (method A) and testing the hypothesis that the median of the data is $\pi$, we observed a rejection after $\sim 1.14\cdot 10^{6}$ samples when utilising the MT PRNG. In contrast, the QRNG passed the test even at $2.8\cdot 10^{6}$ samples. We thus conclude that a higher accuracy can be achieved when using the QRNG.
C
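The excerpts above estimate $\pi$ with the $\pi/4$ Monte Carlo method and test whether the resulting estimates are consistent with $\pi$. A minimal sketch of that pipeline, using NumPy's default generator in place of both entropy sources (the QRNG hardware itself is not modeled here):

```python
import numpy as np
from scipy import stats

def pi_estimates(n_samples, points_per_sample, rng):
    """Each sample estimates pi as 4 * (fraction of uniform points inside the unit quarter-circle)."""
    est = np.empty(n_samples)
    for i in range(n_samples):
        x = rng.random(points_per_sample)
        y = rng.random(points_per_sample)
        est[i] = 4.0 * np.mean(x * x + y * y <= 1.0)
    return est

rng = np.random.default_rng(12345)   # stand-in for the MT PRNG / pPRNG / QRNG streams
est = pi_estimates(n_samples=1000, points_per_sample=1000, rng=rng)

# t-test of H0 "mean of the estimates equals pi"; the excerpt tracks the sample size
# at which this hypothesis is first rejected for each generator.
t_stat, p_value = stats.ttest_1samp(est, popmean=np.pi)
print(est.mean(), t_stat, p_value)
```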
Reinforcement learning from human feedback is extensively utilized to align large language models with human preferences (Bai et al., 2022; Ramamurthy et al., 2023; Xiao et al., 2024; Liu et al., 2024). The established pipeline for LLM alignment via RLHF involves three essential steps using a pretrained LLM (Ouyang et al., 2022):
In this section, we formulate our problem as a $D$-optimal design problem, and propose a dual active learning approach for simultaneous conversation-teacher selection while adhering to the constrained sample budget $T$. Following this, we compute a pessimistic policy that leverages the learned reward estimator for fine-tuning.
Supervised fine-tuning (SFT): First, supervised learning is employed to fine-tune the LLM’s parameters, yielding a policy that takes each prompt (e.g., question) as input, and outputs their completion (e.g., response).
Reward learning: Next, we collect a dataset of comparisons, including two completions for each prompt. The ordinal preferences will be provided by human experts to compare these completions. These preferences are then used to train a reward function, which measures the goodness of a given completion for each prompt, via a ranking model, such as the Bradley-Terry-Luce (BTL) model (Bradley and Terry, 1952). Refer to Table 1 for examples of prompt-completion pairs from the Anthropic dataset (Bai et al., 2022).
Reinforcement learning: Finally, an RL algorithm, typically the proximal policy optimization (Schulman et al., 2017), is applied to the prompt-conversation-reward triplets to output the final policy based on the SFT-trained policy and the learned reward function.
B
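The reward-learning step described above fits a reward function to pairwise preferences via the Bradley-Terry-Luce model. A minimal sketch of the corresponding negative log-likelihood, using a linear reward parameterization purely for illustration:

```python
import numpy as np

def btl_negative_log_likelihood(theta, phi_chosen, phi_rejected):
    """BTL model: P(chosen preferred over rejected) = sigmoid(r_chosen - r_rejected),
    with a linear reward r(x) = phi(x) @ theta assumed here for simplicity."""
    margins = phi_chosen @ theta - phi_rejected @ theta
    # -log sigmoid(m) written in a numerically stable form: log(1 + exp(-m)).
    return np.mean(np.logaddexp(0.0, -margins))

# Toy data: 100 comparisons with 5-dimensional features.
rng = np.random.default_rng(0)
phi_c, phi_r = rng.normal(size=(100, 5)), rng.normal(size=(100, 5))
theta = np.zeros(5)
print(btl_negative_log_likelihood(theta, phi_c, phi_r))  # equals log 2 at theta = 0
```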
Using similar arguments as above, the result of Corollary 2.4 follows directly from a straightforward application of Corollary 3.4. However, for other values of $\alpha$, Lemma A.2 can also be used to determine the corresponding upper bounds. Specifically, when $\alpha=\frac{1}{2}$, the condition simplifies to $\lambda=C_{K}^{2}$. For $\alpha\in\left(\frac{1}{2},1\right)$, the condition becomes $\lambda>C_{K}^{2}$, and for $\alpha=1$, it reduces to $C_{K}^{2}=0$. In all cases, the derived upper bound exhibits a decay with time.
This paper is organized as follows. In Section 2, we state Theorem 2.3, which provides error bounds between the target function $f_{\lambda,\mu}$ and the approximation obtained via the online regularized algorithm in $\mathcal{H}_{K}$ along the Markov chain trajectory. In Section 3, we present Theorem 3.3, which establishes error estimates for the optimal solution of a general quadratic loss function in a general Hilbert space using the Markov chain stochastic gradient algorithm. This result forms the foundation for Theorem 2.3. In Section 4, we derive some preliminary results and provide the proof of Theorem 3.3. Building on the previous section, we prove Theorem 2.3 in Section 5 and also Proposition 2.1. Finally, in Section 7, we recall some established results essential for various proofs presented in this paper.
We show that the error bounds depend on an additional factor involving the mixing time $t_{\text{mix}}$ of the Markov chain. The resulting error rate is, for example, of the form $\mathcal{O}\big(t_{\text{mix}}t^{-\theta}\big)$ for $\theta\in\left(\frac{1}{2},1\right)$. The mixing time plays a crucial role in determining how quickly a Markov chain converges to its stationary distribution. It ensures that the chain reaches a point where its distribution is sufficiently close to the stationary distribution. This guarantees that the samples drawn from the chain are representative of the stationary distribution. For more details see also [19, Subsection 3.2]. We also provide faster rates of convergence under the assumptions of uniform and geometric ergodicity. In the special case of i.i.d. samples, where $t_{\text{mix}}=0$, the upper bound for $\mathcal{E}^{2}_{\text{samp}}(t)$ simplifies to the well-studied case analyzed in [14]. Thus, the i.i.d. setting is a special case of our results.
In this paper, we extend the classical framework of online learning algorithms by relaxing the i.i.d. assumption. Instead, we consider samples along a Markov chain trajectory with a stationary distribution. Under this setting, we achieve nearly optimal learning rates, introducing an additional factor that depends on the chain’s mixing time. Furthermore, we demonstrate that the i.i.d. case is a special instance of our more general results.
In Theorem 2.3, the decomposition (7) distinguishes between the initial error and the sampling error. The initial error at time step $t$, denoted by $\mathcal{E}_{\text{init}}(t)$, arises deterministically and depends on the initial choice, reflecting the accumulated effect from the starting point. In contrast, the sampling error at time step $t$, $\mathcal{E}^{2}_{\text{samp}}(t)$, depends on the randomness of the sample. Since the samples are generated along the trajectory of a Markov chain, influenced by its initial distribution, an additional factor involving the chain’s mixing time influences the convergence. Consequently, the error rate for $\mathcal{E}^{2}_{\text{samp}}(t)$ takes the form $\mathcal{O}\big(t^{-\theta}t_{\text{mix}}\big)$, where the mixing time $t_{\text{mix}}$ slows down the convergence. A similar analogy also follows in the case of Corollary 2.4, except that in this case the error rate is of the form $\mathcal{O}\big(t^{-\alpha}t_{\text{mix}}\big)$, where $\alpha\in\left(0,\frac{1}{2}\right)$. In the special case of i.i.d. samples, where $t_{\text{mix}}=0$, the upper bound for $\mathcal{E}^{2}_{\text{samp}}(t)$ simplifies to the well-studied case analyzed in [14]. Thus, the i.i.d. setting is a special case of our results.
C
$4.23\mathrm{e}{-1}\pm 4.03\mathrm{e}{-1}$
$2.92\mathrm{e}{-2}\pm 2.54\mathrm{e}{-2}$
$7.06\mathrm{e}{-1}\pm 5.54\mathrm{e}{-1}$
$2.41\mathrm{e}{-1}\pm 1.54\mathrm{e}{-1}$
$2.54\mathrm{e}{-1}\pm 2.59\mathrm{e}{-1}$
D
$R_{AB}(t)\leq\frac{1}{M}\sum_{k=1}^{n}|V_{k}|\left(\frac{C_{1}}{t|V_{k}|}+\sqrt{\frac{C_{2}\xi_{|V_{k}|}\beta_{t}\Psi_{t|V_{k}|}}{t|V_{k}|}}\right)$, concluding our proof.
By picking $n$ to be the clique cover number of the graph $G$, Theorem 3.1 yields the following corollary.
Suppose $k(x,x')\leq 1$ for all $x,x'$. Let $\theta(G)$ and $\omega(G)$ denote the clique cover number and clique number of the graph $G$ respectively. Then, the Bayesian average regret after $t$ timesteps satisfies
The proof of Corollary 3.2 follows from (i) applying Cauchy-Schwarz to bound the term $\sum_{k=1}^{n}\sqrt{|V_{k}|}\leq\sqrt{n}\sqrt{\sum_{k=1}^{n}|V_{k}|}=\sqrt{Mn}$, (ii) picking $n$ to be the clique cover number of $G$, $\theta(G)$, and (iii) the fact that for any clique $G_{k}=(V_{k},E_{k})$ in $G$, $|V_{k}|\leq\omega(G)$, since $\omega(G)$ denotes the clique number of $G$ (i.e. the size of the largest clique in $G$).
Picking $G_{s}$ to be the largest complete subgraph of the communication network $G$ then yields the following corollary.
A
Our proposed architecture is based on pre-trained Transformer models. Transformer-based neural processes (Müller et al.,, 2021; Nguyen and Grover,, 2022; Chang et al.,, 2024) serve as the foundational structure for our approach, but they have not considered experimental design. Decision Transformers (Chen et al.,, 2021; Zheng et al.,, 2022) can be used for sequentially designing experiments. However, we additionally amortize the predictive distribution, making the learning process more challenging.
In this paper, we proposed an amortized framework for decision-aware Bayesian experimental design (BED).
In this paper, we propose an amortized decision-making-aware BED framework, see Fig. 1(c). We identify two key aspects where previous amortized BED methods fall short when applied to downstream decision-making tasks. First, the training objective of the existing methods does not consider downstream decision tasks. Therefore, we introduce the concept of Decision Utility Gain (DUG) to guide experimental design to better align with the downstream objective. DUG is designed to measure the improvement in the maximum expected utility derived from the new experiment.
In this section, we evaluate our proposed framework on several tasks. Our experimental approach is detailed in Appendix B. In Section F.3, we provide additional ablation studies of TNDP to show the effectiveness of our query head and the non-myopic objective function. The code to reproduce our experiments is available at https://github.com/huangdaolang/amortized-decision-aware-bed.
Results. The results are shown in Fig. 3(b), where we can see that TNDP achieves significantly better average accuracy than other methods. Additionally, we conduct an ablation study of TNDP in Section F.3 to verify the effectiveness of $f_{\text{q}}$. We further analyze the deployment running time to show the advantage of amortization, see Section D.1.
C
$\mathbf{3.84}_{\pm 0.91}$
$4.99_{\pm 1.04}$
$4.99_{\pm 1.04}$
$4.99_{\pm 1.04}$
$4.99_{\pm 1.04}$
A
$\hat{C}_{B^{2},X}(Y)=\frac{1}{n}\odot\bigoplus_{i=1}^{n}\langle X_{i}\ominus\hat{\mu}_{B^{2},X},Y\rangle_{B}^{2}\odot(X_{i}\ominus\hat{\mu}_{B^{2},X})$ is the sample covariance operator in $\mathcal{B}^{2}(I)$ and $\lambda_{j},j\geq 1$, are its eigenvalues. As discussed in Hron et al. (2016), this problem can be solved in $L^{2}(I)$ by using the clr transformation; see also Lemma 2.1.
Despite the attention that the Bayes space methodology for density data analysis has attracted over the last decades, little focus has been paid to robust frameworks necessary for meaningful analysis in the presence of anomalies. Therefore, this paper introduces robust density PCA (RDPCA) as a methodology to robustly estimate the covariance operator and functional principal components of densely sampled univariate density data. Within this newly proposed method, we extend the notion of a functional Mahalanobis distance (Berrendero et al., 2020; Galeano et al., 2015) to the Bayes space. The resulting regularized Mahalanobis distance (RDMD) leads to a robust covariance estimation based on a subsample of those curves that best fit the underlying distribution. Based on this estimate, robust PCs for density function can be determined as summarized in Algorithm 1. During simulations, we demonstrated the efficiency of RDPCA as well as the advantages it has against a non-robust approach in the presence of outliers. Furthermore, if outliers arise within the tails of the densities, the clr transformation can amplify these deviations. Especially in these scenarios, we observed that common robust procedures that do not respect the nature of the data would fail.
However, as previously emphasized, the eigendecomposition (4), and consequently also the optimization problem (3), are sensitive to the presence of outlying curves in the sample. Therefore, in order to achieve a robust estimation of the functional PCs, the underlying covariance will be based on a subsample consisting of the most central data points. The centrality of each observation will here be quantified by an adaptation of a regularized functional Mahalanobis distance for densities.
Consequently, the structure of the covariance or correlation function can also be significantly influenced in the presence of outlying curves. This is showcased in Figure 9 where robust and non-robust correlations are compared. Between wavelength 100 to 250 and around 400, the outliers exhibit a different structure. Especially at these parts of the domain, the non-robust correlation shows its bias towards the outlying curves.
One of our initial assumptions was that the data had been observed at a dense grid. In the case where the data is only available at a sparse grid, further extensions would require smoothing by an appropriate basis, e.g., CB-splines (Machalová et al., 2021) specifically developed for Bayes spaces. Naturally, the next step from univariate data would be to consider multivariate densities. During multivariate functional data analysis, each observation contains the recording of several “functional” variables. In this setting, not only the covariance between different time points is considered, but also the relation between individual variables. If the second dimension is also continuously observed, the observations are so-called random surfaces. As these statistical fields show increasingly practical relevance (Berrendero et al. (2011); Górecki et al. (2018); Dai and Genton (2018); Masak and Panaretos (2023) and references therein), expanding RDPCA to these areas seems worthwhile. Calculating the RDMD involved regularizing by a suitable operator that smooths out unwanted noise components while keeping the relevant signal within the (uncontaminated) data unaffected. Another class of meaningful operators that utilize the functional nature of the data are differential operators often used in Tikhonov regularization. One has to keep in mind that these operations have yet to be defined for Bayes spaces. Next to PCA, the regularized Mahalanobis distance could be used for concepts like linear or quadratic discriminant analysis for the classification of density data. As these methods rely on similarity measures involving several covariance operators, the robust classification of densities based on the RDMD would be suitable.
B
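The excerpts above move densities from the Bayes space $\mathcal{B}^{2}(I)$ to $L^{2}(I)$ via the clr transformation before performing (robust) PCA. A minimal sketch of a discretized clr map followed by ordinary, non-robust PCA; the RDMD-based subsampling described above is not reproduced here:

```python
import numpy as np

def clr(densities, eps=1e-12):
    """Discrete centred log-ratio transform: log of the density evaluations minus
    their mean over the grid. densities: (n_samples, n_grid), strictly positive."""
    logf = np.log(densities + eps)
    return logf - logf.mean(axis=1, keepdims=True)

def density_pca(densities, n_components=2):
    """Non-robust PCA of densities in clr coordinates (a simplified stand-in for RDPCA)."""
    Z = clr(densities)
    Zc = Z - Z.mean(axis=0, keepdims=True)
    U, S, Vt = np.linalg.svd(Zc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    return scores, Vt[:n_components]   # PC scores and clr-space eigenfunctions
```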
$R^{2}$ (Coefficient of Determination), indicating how closely inferred values match the ground truth.
One of the notable contributions is a detailed analysis of reconstruction attacks, wherein a malicious institution can subtract its local counts from the global aggregated counts to infer other institutions’ data. Prior efforts have often acknowledged the feasibility of federated approaches but provided only partial insight into how data overlaps among institutions can amplify privacy threats. In contrast, our work systematically examines how different federation sizes and degrees of patient overlap affect an attacker’s reconstruction accuracy. We find that, in small federations (e.g., 2–3 providers) with large data overlap, an adversary can nearly replicate at-risk and event counts for other sites, posing a serious confidentiality risk. In particular, when only two providers are involved, we demonstrate that an attacker can completely reconstruct the opposing site’s data. Homomorphic encryption proves especially critical under these conditions, as it effectively neutralizes the subtraction-based inference vector.
Few Providers (2–3). Large overlap yields near-perfect accuracy in both datasets when only one other site is present. The attacker’s knowledge heavily overlaps with that single remaining provider, making subtraction-based inference almost exact.
Overlap Impact. Large overlap is devastating for privacy only when the federation is small. With many providers, large overlap ironically confuses the attacker more, driving RMSE upward and reconstruction quality downward.
Lung Cancer. Under no overlap, RMSE remains comparatively low, implying the attacker’s estimates are (ironically) more accurate than in large overlap when many sites are present. Large overlap grows steeply with more providers, showing the attack’s failure on multi-site shared data.
B
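The reconstruction attack described above is simple arithmetic: a malicious site subtracts its own counts from the released aggregates. A toy illustration with two providers, where the subtraction recovers the other site's counts exactly (all numbers are made up):

```python
import numpy as np

# Aggregated at-risk and event counts per time point, as released by the federation.
global_at_risk = np.array([100, 80, 55, 30])
global_events  = np.array([  5,  7,  4,  3])

# The attacker's own local contribution.
my_at_risk = np.array([40, 33, 22, 12])
my_events  = np.array([ 2,  3,  1,  1])

# With only one other provider, subtraction recovers its counts exactly; with more
# providers (or homomorphically encrypted aggregation), only a pooled remainder is exposed.
other_at_risk = global_at_risk - my_at_risk
other_events  = global_events - my_events
print(other_at_risk, other_events)
```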
We calculated various measures to check the predictive accuracy of FPET. All measures are based on countries with data after 2018.
The second set of measures is focused on the differences between left-out survey data and point predictions. We do not necessarily expect these differences to be small for all survey data: we only expect small differences for survey data that is subject to small sampling and non-sampling errors, such as most DHSs. Results for observations from DHS surveys are shown in Table 3. We see that results are comparable to the results in Table 1.
The first set of measures is focused on the comparison between point estimates and uncertainty intervals (UIs) from the training and full data set. We calculate prediction errors in the FPET point predictions, referring to the difference between the FPET point prediction for a given year and its updated estimate. We also check how often the point prediction falls outside of the UIs based on all data. We expect these errors to be small and for point predictions to fall inside updated intervals at at least the nominal level.
Table 4: Summary of FPET1 prediction errors (in percentage) for the year 2020. An error refers to the difference between the estimate for the indicator based on the full data set and the indicator based on the training data, for the year 2020 (3 year forecast horizon). A positive (negative) error indicates that the prediction based on data up to 2018 underpredicted (overpredicted) the target value.
Table 1: Summary of prediction errors (in percentage) for the year 2020. An error refers to the difference between the estimate for the indicator based on the full data set and the indicator based on the training data, for the year 2020 (3 year forecast horizon). A positive (negative) error indicates that the prediction based on data up to 2018 underpredicted (overpredicted) the target value.
B
Spintronic devices are built using magnetic materials, as the magnetization (magnetic moment per unit volume) of a magnet is a macroscopic manifestation of its correlated electron spins. The prototypical spintronic device, called the magnetic tunnel junction (MTJ), is a three-layer device which can act both as a memory unit and a switch (Moodera et al., 1995). It consists of two ferromagnetic layers separated by a thin, insulating non-magnetic layer. When the magnetization of the two ferromagnetic layers is aligned parallel to each other, the MTJ exhibits a low resistance ($R_{P}$). Conversely, when the two magnetizations are aligned anti-parallel, the MTJ exhibits a high resistance ($R_{AP}$). By virtue of the two discrete resistance states, an MTJ can act as a memory bit as well as a switch. In practice, the MTJs are constructed such that one of the ferromagnetic layers stays fixed, while the other layer’s magnetization can be easily toggled (free layer, FL). Thus, by toggling the FL, using a magnetic field or electric currents, the MTJ can be switched between its ‘0’ and ‘1’ state.
An MTJ can serve as a natural source of randomness upon aggressive scaling, i.e. when the FL of the MTJ is shrunk to such a small volume that it toggles randomly just due to the thermal energy in its vicinity. It is worth noting that such a stochastic MTJ (s-MTJ) can produce a Bernoulli-distribution-like probability density function (PDF), with $p=0.5$, without any external stimulus, by virtue of only the ambient temperature. However, applying a bias current across the s-MTJ allows tuning of the PDF through the spin transfer torque mechanism. As shown in Figure 5c-f of Appendix A, applying a positive bias current across the device makes the high resistance state more favorable, while applying a negative current has the opposite effect. In fact, by applying an appropriate bias current across the s-MTJ, using a simple current-mode digital-to-analog converter as shown in Figure 6a of Appendix A, we can achieve precise control over the Bernoulli parameter ($p$) exhibited by the s-MTJ. The $p$-value of the s-MTJ responds to the bias current through a sigmoidal dependence. A more detailed version of this section on the physical principles, device structure and simulations of the s-MTJ device can be found in Appendix A.
Figure 1 depicts our hardware configuration for sampling a single Float16 value. Each $d_i$ is an s-MTJ device. The devices $d_{10},\cdots,d_{14}$ for the exponent are equipped with 4 control bits to adjust the current bias $c_i$, which corresponds to the Bernoulli probability. The other devices are set to a fixed current bias equivalent to a Bernoulli of $0.5$. The resolution, which determines how accurately we can set the Bernoulli distributions for a device, is dependent on the number of control bits and is visualized in Figure 2. This figure displays the specific Bernoulli values achievable with 4 control bits. Although additional control bits could allow for more precise settings, we restrict this number to 4 due to physical limitations in setting current biases in hardware with higher resolution while keeping the bias circuit simple (and hence energy-efficient). Our approach focuses on achieving high accuracy around a probability of 1 (cf. configuration in Section 5.2) by taking advantage of the characteristics of the sigmoid function, thus making 4 bits sufficient for achieving the required probability density function.
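As a software illustration of this bit-level assembly, the following is a minimal Python sketch that stands in for the hardware: each s-MTJ is modeled as a plain Bernoulli draw, the bit layout follows IEEE 754 half precision (10 mantissa bits, 5 exponent bits, 1 sign bit), and the exponent biases passed in are illustrative rather than values from the paper.

```python
import numpy as np

def sample_bit(p, rng):
    """Stand-in for reading one s-MTJ: a single Bernoulli(p) draw."""
    return int(rng.random() < p)

def sample_float16(exponent_ps, rng):
    """Assemble one IEEE 754 half-precision value from 16 Bernoulli bits.

    Bits 0-9 (mantissa) and bit 15 (sign) come from devices fixed at p = 0.5;
    bits 10-14 (exponent) come from devices d_10..d_14 with tunable biases.
    """
    bits = 0
    for i in list(range(10)) + [15]:          # fair-coin devices
        bits |= sample_bit(0.5, rng) << i
    for i, p in enumerate(exponent_ps):       # biased exponent devices
        bits |= sample_bit(p, rng) << (10 + i)
    return np.array([bits], dtype=np.uint16).view(np.float16)[0]

rng = np.random.default_rng(0)
samples = [sample_float16([0.9, 0.8, 0.5, 0.2, 0.1], rng) for _ in range(5)]
print(samples)
```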
The number of control bits in an s-MTJ device impacts both energy consumption and the precision of setting the energy bias, which in turn affects the available probabilities of obtaining bit samples. Figure 2 illustrates this relationship. This section evaluates the approximation error caused by imprecision in achieving a desired Bernoulli distribution.
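The interplay between the number of control bits and the reachable Bernoulli values can be illustrated with a toy calculation; the sigmoid scale and the DAC current range below are made up, so this is only a qualitative sketch, not device physics.

```python
# Toy illustration: the Bernoulli parameter responds sigmoidally to the bias
# current, and a 4-bit DAC quantizes the achievable bias levels, so only 16
# p-values are reachable. Scale constants are hypothetical.
import numpy as np

def p_of_current(i_bias, i_scale=1.0):
    return 1.0 / (1.0 + np.exp(-i_bias / i_scale))   # sigmoidal dependence

codes = np.arange(16)                                 # 4 control bits
i_levels = np.linspace(-4.0, 4.0, 16)                 # hypothetical DAC range
achievable_p = p_of_current(i_levels)
print(dict(zip(codes.tolist(), np.round(achievable_p, 3).tolist())))
```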
Spintronic devices are built using magnetic materials, as the magnetization (magnetic moment per unit volume) of a magnet is a macroscopic manifestation of its correlated electron spins. The prototypical spintronic device, called the magnetic tunnel junction (MTJ), is a three-layer device which can act both as a memory unit and a switch (Moodera et al., 1995). It consists of two ferromagnetic layers separated by a thin, insulating non-magnetic layer. When the magnetization of the two ferromagnetic layers is aligned parallel to each other, the MTJ exhibits a low resistance ($R_P$). Conversely, when the two magnetizations are aligned anti-parallel, the MTJ exhibits a high resistance ($R_{AP}$). By virtue of the two discrete resistance states, an MTJ can act as a memory bit as well as a switch. In practice, the MTJs are constructed such that one of the ferromagnetic layers stays fixed, while the other layer’s magnetization can be easily toggled (free layer, FL). Thus, by toggling the FL, using a magnetic field or electric currents, the MTJ can be switched between its ‘0’ and ‘1’ state.
A
From 2019 onwards, one additional point is given to the driver who finishes in the top ten and furthermore sets the fastest lap of the race.
From the FIA site, we can retrieve the drivers' classification for each GP of the considered championship.
of ties between the ranked elements. Kendall corrected evolutive coefficient can be considered as an extension of a correlation coefficient of two rankings applied to $m$ rankings and therefore, as output, $\widehat{\tau}_{ev}^{\bullet}$ gives a real number in $[-1,1]$.
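Since the exact definition of the evolutive coefficient is not reproduced in this excerpt, the following Python sketch only illustrates the underlying idea with a rough proxy: averaging tie-corrected Kendall tau-b over all pairs of race rankings in a season; the rankings below are made up.

```python
# Rough proxy for a season-level rank-agreement coefficient: average Kendall
# tau-b (which handles ties) over all pairs of GP rankings; values lie in [-1, 1].
from itertools import combinations
from scipy.stats import kendalltau

def season_tau_proxy(rankings):
    """rankings: one list of driver ranks per GP (same driver order in each)."""
    taus = [kendalltau(a, b)[0] for a, b in combinations(rankings, 2)]
    return sum(taus) / len(taus)

season = [
    [1, 2, 3, 4, 5],   # GP 1: finishing ranks of drivers A..E (made up)
    [2, 1, 3, 5, 4],   # GP 2
    [1, 3, 2, 4, 5],   # GP 3
]
print(season_tau_proxy(season))
```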
and, again, some rules are applied to break the ties, if any. Our collection of rankings consists precisely of the rankings of each GP in a season, both for drivers and constructors. We use these series of rankings to compute
The FIA has some rules to break ties between the drivers and therefore the ranking of the drivers can be considered as a ranking with no ties.
D
$= -\dot{S} - c\,I$
If $0 < d \ll 1$ represents a small proportion of initially infected individuals, the initial conditions of the system are given by $(S(0), I(0), R(0)) = (1, d, 0)$. From the form of (1), it follows that the system satisfies the conservation law $S + I + R = 1 + d$. This and elementary manipulations of the first and the last equation in (1) give the following alternative representation of the classical SIR model
The above system may be considerably simplified by removing the last two equations. Indeed, by dividing the last equation by the first one and solving the resulting differential equation under the assumption that $[SS](0) = \mu\,[S](0)$, it follows that $[SS] = \mu\,[S]^{2\kappa}$. Substituting this formula into the fourth equation and dividing again by the first one, we obtain another differential equation for $[SI]$ as a function of $[S]$. That equation may then be solved explicitly, depending on the value of $\kappa$, yielding the reduced version of (4) which, by Theorem 1, may be written in terms of the limiting proportions of $[S]/n$, $[I]/n$, and $[R]/n$ as $n \to \infty$, which we denote below by $S, I, R$. The final system has the following form:
Note that the first equation in the system above involves only S𝑆Sitalic_S. Since it describes the decline of susceptibles (and consequently the emergence of new cases), it is often referred to as the epidemic curve equation (see, for instance, [7]).
The above system is known as the SIR compartmental ODE model, the simplest example of a deterministic system describing the spread of a disease in a closed population. From the equations, we note the following.
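For readers who want to see the classical system in action, here is a minimal numerical sketch; the transmission parameter beta is my own notation (only the recovery rate c appears in this excerpt), and all parameter values are illustrative.

```python
# Minimal sketch of the classical SIR system integrated numerically.
import numpy as np
from scipy.integrate import solve_ivp

beta, c, d = 1.5, 0.5, 1e-3          # illustrative values

def sir(t, y):
    S, I, R = y
    return [-beta * S * I, beta * S * I - c * I, c * I]

sol = solve_ivp(sir, (0.0, 30.0), [1.0, d, 0.0], dense_output=True)
S, I, R = sol.y
print("conservation check S+I+R =", S[-1] + I[-1] + R[-1])   # stays ~ 1 + d
```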
C
For simplicity of notation, assume that $t_n \rightarrow t^{*}$ as $n \rightarrow \infty$.
Because $\widehat{\psi}$ is assumed to be Lipschitz continuous on $(0,\infty)$
Because the pointwise supremum of any collection of continuous functions is lower semi-continuous, we have
In order to apply the dominated convergence theorem to extend the result in Lemma 1 to the continuous index set case, we need to ensure the convergence of the expected value of the supremum.
not only are continuous in the quadratic mean but also almost surely have a modification that is sample continuous,
B
While conceptually simple, the computational demands of this grow quickly with the size of the state space. Thus, in the next section, we discuss a method based on Bayesian optimization to allocate any computational budget we may have more efficiently.
We use two main metrics: the entropy of the posterior distribution over reward parameters after a given number of steps of active learning and the expected return (with respect to the initial state distribution and environment dynamics) of an apprentice policy maximizing this expected return (also with respect to the posterior over rewards).
We first collect a fixed initial number of samples for each state. Then, we repeat the following until we have exhausted a budget of trajectories $T$. Following standard Gaussian updating, after an observation of a new hypothetical trajectory from $s$, we update the parameters
and compute a new EIG estimate for the value $s^{*}$ maximizing the upper confidence bound:
We propose to use Bayesian optimization [8], in particular the upper confidence bound (UCB) algorithm [9], to adaptively choose from which initial states to sample additional hypothetical trajectories to efficiently estimate the EIG. We still use the basic structure of (2), but instead of using the same number of samples in each initial state, we dynamically choose where to add additional samples to best improve our chance of identifying the state maximizing the EIG.
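A schematic Python sketch of this allocation strategy is below; the Gaussian updating is reduced to a running mean with a generic exploration bonus, `simulate_eig_sample` is a hypothetical stand-in for drawing one Monte Carlo EIG estimate at a state, and none of the constants come from the paper.

```python
# UCB-driven allocation of hypothetical-trajectory samples across initial states.
import numpy as np

def ucb_allocate(states, simulate_eig_sample, budget, n_init=3, beta=2.0):
    mu = np.zeros(len(states))                # running mean EIG estimate per state
    n = np.zeros(len(states))                 # number of samples per state
    for i, s in enumerate(states):            # fixed initial samples per state
        draws = [simulate_eig_sample(s) for _ in range(n_init)]
        mu[i], n[i] = np.mean(draws), n_init
    for _ in range(budget):
        ucb = mu + beta / np.sqrt(n)          # optimistic score per state
        i = int(np.argmax(ucb))
        x = simulate_eig_sample(states[i])
        n[i] += 1
        mu[i] += (x - mu[i]) / n[i]           # running-mean update
    return states[int(np.argmax(mu))]         # state with best estimated EIG
```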
D
Tucker-decomposition $\mathcal{D} = \mathcal{G} \cdot (\mathbf{A}^{(1)}, \mathbf{A}^{(2)}, \mathbf{A}^{(3)})$ as:
$\mathbf{D}^{(1)}:$
The left singular vectors $\mathbf{\Xi}^{(1)}, \mathbf{\Xi}^{(2)}$ and $\mathbf{\Xi}^{(3)}$ can be obtained from the SVDs of the matricizations $\mathbf{D}^{(1)}, \mathbf{D}^{(2)}$ and $\mathbf{D}^{(3)}$, respectively.
$\mathbf{D}^{(3)}:$
$\mathbf{D}^{(2)}:$
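To make the SVD-of-matricizations step concrete, here is a minimal HOSVD-style sketch in numpy; it assumes a dense 3-way array and truncates each factor to a chosen rank, which may differ from the exact procedure used in the source.

```python
# Minimal HOSVD-style sketch: factor matrices from the left singular vectors of
# each mode-n matricization, core obtained by projecting D onto them.
import numpy as np

def unfold(D, mode):
    """Mode-n matricization: the chosen mode becomes the rows."""
    return np.moveaxis(D, mode, 0).reshape(D.shape[mode], -1)

def hosvd(D, ranks):
    U = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(D, mode), full_matrices=False)
        U.append(u[:, :r])                    # leading left singular vectors
    G = D
    for mode, u in enumerate(U):              # core: multiply by U^T along each mode
        G = np.moveaxis(np.tensordot(u.T, np.moveaxis(G, mode, 0), axes=1), 0, mode)
    return G, U

D = np.random.default_rng(0).normal(size=(4, 5, 6))
G, (A1, A2, A3) = hosvd(D, ranks=(2, 2, 2))
print(G.shape, A1.shape, A2.shape, A3.shape)  # (2, 2, 2) (4, 2) (5, 2) (6, 2)
```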
A
In Section 2, we formally define important covariates in the compositional setting via the Markov boundary and prove that it remains well-defined under mild conditions despite compositionality. Section 3 details our methods for testing and controlled variable selection and their theoretical guarantees. In Section 4, we assess the validity and power of our proposed methods across diverse data generation scenarios and hyperparameter settings. Finally, Section 5 concludes the paper with contributions, limitations, and ideas for future research.
Item 1 in 2.1 says that, after accounting for the covariates in the Markov boundary, all the remaining covariates provide no further information about Y𝑌Yitalic_Y. Item 2 says that the Markov boundary is the minimal such set, in the sense that no subset of it has the property in item 1. Together, this definition informally says that the Markov boundary is the minimal set of variables that, once known, allows us to drop all other variables without losing information about Y𝑌Yitalic_Y, and removing any variable from this set would lead to a strict loss of information about Y𝑌Yitalic_Y. When the covariates are not compositional, the Markov boundary is (under very mild conditions) equivalent to the set of important covariates defined via conditional dependence (Edwards,, 2012; Candès et al.,, 2018), which, as mentioned in the second-to-last paragraph of Section 1.1, is the basis for covariate importance throughout the literature on parametric and nonparametric methods for identifying important covariates in regression.
Unfortunately, when the covariates are compositional, there will in general be multiple Markov boundaries of $Y$, so the remaining subsections of this section are devoted to establishing mild conditions under which the above definition provides a well-defined, unique, and nontrivial set of important covariates. But in the remainder of this subsection we will first justify why 2.1 provides a natural and intuitive definition for the set of important covariates. For readability, in the remainder of this subsection, we will suppose we are in the setting established by the end of the next subsection, in which there is a unique nontrivial Markov boundary, and hence refer to the Markov boundary instead of a Markov boundary.
Formalizing important covariates under compositionality: To overcome the aforementioned misalignment between hypotheses (conditional or unconditional) and true signals in a parsimonious regression model with compositional covariates, we define the set of important covariates as the minimal set of covariates that together render all other covariates unhelpful for predicting the response, namely, the Markov boundary. We argue that under compositionality of the covariates, the Markov boundary aligns with natural intuition for what covariates should be discovered, and we show that it is well-defined, nontrivial, and unique under mild conditions. Although our conditions are technical, we show they can be simplified in certain compositional cases of interest.
As highlighted in the previous section, the typical ways of defining an important covariate are not well-suited to compositional covariates. In this section, we put forth the Markov boundary as a solution and argue that, unlike conditional or unconditional dependence, membership in the Markov boundary continues to capture the right notion of an important covariate even under compositionality.
D
We use the integrated likelihood (2) throughout and assume that $y$ and the columns of $X$ have
The literature on the connection between Bayesian posterior modes and estimators described as solutions
Table 2: Prior scaling and data augmentation parameterization in the Bayesian elastic net literature. Double horizontal
literature. In this section we review the four combinations of representation and form, provide the corresponding posterior
Bayesian regression models with connections to the elastic net have also received extensive attention in the literature.
A
This update has a step size that takes the steepness into account, as in Adadelta, but also tends to move in the same direction, as in Momentum.
Figure 4 shows how the different parameter update methods perform within the innermost loop in our alternating Tweedie regression. The three update methods are the Fisher scoring type update with and without learning rate adjustment, and the gradient descent Adam update. The loss reductions presented are from the first iteration and the first row of the data matrix in Algorithm 1. All three lead to a quick reduction of the loss within 5 epochs. The updates with or without learning rate adjustment reach a stable loss value quickly. On the other hand, the Adam update does not stabilize as fast as the other two methods. Further, note that the loss values of the updates with or without learning rate adjustment are small compared to that of the Adam update. Taking into account the fact that all three updates used the same simulated data and started with identical initial values, the difference in the loss values is purely due to the different updating schemes. Deriving the first and second derivatives manually and using the Fisher scoring algorithm makes a difference in finding the right direction of the update. This point is even more obvious when we look at the overall loss over many iterations of updates in Figure 5. This figure contains the loss over all rows and columns of the data matrix from over 120 iterations. The Adam algorithm decreases the overall loss as the iterations increase, but it is not as effective as the other two updating methods. Since the Adam update is inefficient in finding the right optimizing direction in the alternating Tweedie regression algorithm, we exclude it from further discussion and only compare the Fisher scoring type update with or without learning rate adjustment in more detail.
Most of the gradient descent variants do not use the second derivative of the objective function, because it is often difficult to compute. We believe the use of an adaptive learning rate together with the Fisher information matrix in our algorithm provides more benefit than the variants of the gradient descent algorithm. This is because the variants of the gradient descent algorithm either ignore the second derivative or use the squared first derivative to approximate it. On the other hand, the Fisher information provides the variance of the first derivative (i.e., the score function) of the objective function, and the inverse of the Fisher information matrix provides the asymptotic variance of the parameters to be estimated according to likelihood inference. The plots in Figure 2 show how the three updating schemes differ when they are applied to a simulated dataset generated with the alternating Tweedie regression model (see Section 3). We can see from the left panel, which shows the log(loss) for only one row of data with repeated updates over 10 epochs, that the Adam update is less effective in reducing the loss for each epoch compared to the update with or without learning rate adjustment. When the algorithm continuously updates for over 100 iterations, the log(overall loss) in the right panel also shows that the Adam update does not reduce the loss as fast as the other two updating schemes.
Figures 6 and 7 show a more detailed examination of the Fisher scoring update with or without learning rate adjustment. We can see from Figure 6 that, during the first 20 iterations, the two cases behave similarly in the norm of the score vector. However, from the 20th to around the 130th iteration, the update with no learning rate is more effective in reducing the overall loss than the one with the learning rate. This is not the case for the norm of the score vector. From the 20th to around the 80th iteration, the update with no learning rate is a little less effective in reducing the norm of the score vector than the update with learning rate. However, from the 80th to the 120th iteration, the case with no learning rate is more aggressive than the one with learning rate. After the 120th iteration until convergence, the no-learning-rate case starts to sacrifice $U_{\widetilde{\boldsymbol{\beta}}}$ severely to improve $U_{\boldsymbol{\beta}}$ while maintaining the overall loss in the same neighborhood. On the other hand, small updates with learning rate adjustment help the algorithm achieve a lower overall loss. Figure 7 shows how the log loss changes in the last 60 iterations. The one with no learning rate fluctuates slightly around the same value 617109.2 and is trapped there, causing the algorithm to believe the convergence criterion is satisfied. The one with learning rate consistently reduces the overall loss and achieves a smaller overall loss of 617105.8 when the convergence criterion is satisfied. To verify that this pattern did not happen by chance, we generated 8 more datasets and applied the algorithm to them. The results are summarized in Figure 8. The behavior of the algorithm is essentially the same as illustrated above.
We applied the alternating Tweedie regression algorithm to the generated data. For comparison, the gradient descent algorithm with the Adam update was applied to the same data. For the Adam update, we used the Adam optimizer (torch.optim.Adam) from the PyTorch package. A learning rate scheduler based on ReduceLROnPlateau from torch.optim.lr_scheduler was used to reduce the learning rate. The default setting of this scheduler is used: if the loss does not improve within 10 epochs, the learning rate is reduced by a factor of 0.1. With the Adam update, computationally we only need to compute the loss and keep track of the gradient of the loss with respect to the parameters being updated by setting .requires_grad_(True) on the parameter tensor. The gradient was computed with the .backward() method of the loss tensor. The parameter update was done with the optimizer’s .step() method. The optimizer’s gradient was reset to zero within each epoch with the .zero_grad() method. For a fair comparison, the innermost epoch loops are set to 10 for all three update methods: with and without learning rate adjustment and the Adam update.
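The PyTorch mechanics described here can be condensed into the following minimal sketch; the data, link function, and loss below are placeholders (the actual Tweedie loss of the algorithm is not reproduced), so only the optimizer and scheduler plumbing mirrors the description.

```python
# Stripped-down sketch of the Adam-based inner update; dimensions, learning
# rate, and the placeholder squared-error loss are illustrative only.
import torch

x = torch.randn(200, 5)                      # one "row" worth of covariates
y = torch.rand(200)                          # illustrative responses
beta = torch.zeros(5).requires_grad_(True)   # parameters being updated

optimizer = torch.optim.Adam([beta], lr=1e-2)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)  # patience=10, factor=0.1

for epoch in range(10):                      # innermost epoch loop
    optimizer.zero_grad()
    mu = torch.exp(x @ beta)                 # log link as a stand-in mean model
    loss = torch.mean((y - mu) ** 2)         # placeholder loss
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())
```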
B
Ray (2022) provide a systematic literature review on approaches and algorithms to mitigate cold-start problems in recommender systems.
Matrix factorization has also been used in natural language processing (NLP) in recent years. Word2Vec by Mikolov
Nonnegative matrix factorization (NMF) is particularly useful when dealing with non-negative data, such as in image processing and text mining. Gan
et al. (2013a, 2013b) marks a milestone in NLP history. Although no clear matrices are presented in their study, Word2Vec models the co-occurrence of words and phrases using latent vector representations via a shallow neural network model. Another well-known example of matrix factorization in NLP is the word representation with global vectors (GloVe) by Pennington
Matrix factorization is a fundamental technique in linear algebra and data science, widely used for dimensionality reduction, data compression, and feature extraction. Recent research expands its use in various fields, including recommendation systems (e.g., collaborative filtering), bioinformatics, and signal processing. Researchers are actively pursuing new types of factorizations.
A
The above algorithm basically starts with a set of thresholding values and uses cross-validation to obtain the initial best thresholding value, which has the smallest cross-validation error. Then, in the neighborhood of that value, it searches for an even better one, which has an error close to the best but not at the expense of too many variables.
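The exact search rule is not spelled out in this excerpt, so the following Python sketch only illustrates the idea under an assumed tolerance rule: among thresholds whose cross-validation error is within a tolerance of the minimum, prefer the one that keeps the fewest variables; `cv_error` and `n_selected` are hypothetical callables supplied by the user.

```python
# Schematic threshold search: minimize CV error, then trade a small error
# increase (up to `tol`) for a smaller number of selected variables.
import numpy as np

def choose_threshold(thresholds, cv_error, n_selected, tol=0.01):
    errors = np.array([cv_error(t) for t in thresholds])
    best = errors.min()
    candidates = [t for t, e in zip(thresholds, errors) if e <= best + tol]
    return min(candidates, key=n_selected)   # fewest variables among near-best
```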
Ten multi-class gene expression data sets for human cancers were investigated in this study and are listed in Table 2. These data sets were kindly provided by the authors of Tan
In this section, we discuss the performance of the three algorithms using the 10 multi-class human cancer data sets. In all that follows, our reported misclassification error refers to the percentage of misclassified test samples. Since a random partition of the training data in cross-validation could lead to a different estimated thresholding parameter and hence possibly a different test error, we repeated this process 100 times for each dataset.
Again we use the deep search algorithm (2.2) for selecting the optimal thresholding parameter. Our analysis for this section is also based on the 10 multi-class human cancer datasets listed in Table 2. In all that follows, our reported misclassification error refers to the percentage of misclassified test samples.
Héberger (2013)). For each algorithm, the test errors (or numbers of selected genes) for the ten data sets were ranked. For the same data set, two algorithms may have the same test error. However, they were not ranked together (see Table 3). Because the sample sizes are very different (see Table 2) for different cancer data sets, one misclassified sample contributes a different percentage to the test error for different cancer data sets.
A
Understanding the nonlabor income effect is just as important as having a reliable estimate of the slope elasticity. First, if we want to predict the effect of tax reforms, say the introduction of a liveable guaranteed income, it would make a large difference whether the nonlabor income effect is zero or say -0.5, which is the estimate obtained in Golosov et al.
Thus, panel data normally used are not well designed to accurately capture the nonlabor income effect. Since sizeable precise estimates of nonlabor income effects are rare, many studies neglect to account for nonlabor income, arguing that it is known that the nonlabor income effect is small. However, this reasoning is at odds with the findings in Golosov et al. (2024). They purposely design a data set that should be able to detect a nonlabor income effect, if one exists.
For $\lambda = 1.00\mathrm{E}{-}06$, the estimated nonlabor income elasticity is 0.0074 with a standard error of 0.0355. This implies a 95% confidence interval of (-0.0622, 0.0770). The estimate is neither significantly different from zero nor from -0.06. Likewise, if we define the nonlabor income elasticity as (dY/dR)(R/Y), and use the fact that Y/R is approximately 5.96, the confidence interval for the nonlabor income effect would be (-0.371, 0.459). This finding, that the income effect is estimated with a large standard error, aligns with many other results for panel data studies of taxable income.
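As a quick sanity check of the quoted interval arithmetic (a normal 95% interval around 0.0074, then rescaling by Y/R ≈ 5.96 to obtain the income effect), in Python:

```python
# Reproduces the two intervals quoted above from the reported estimate and SE.
est, se, y_over_r = 0.0074, 0.0355, 5.96
lo, hi = est - 1.96 * se, est + 1.96 * se
print(round(lo, 4), round(hi, 4))                        # ~ -0.0622, 0.0770
print(round(lo * y_over_r, 3), round(hi * y_over_r, 3))  # ~ -0.371, 0.459
```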
Second, knowing the nonlabor income effect is important when calculating the compensated taxable income elasticity, which is the relevant elasticity when calculating deadweight losses of taxes.
Understanding the nonlabor income effect is just as important as having a reliable estimate of the slope elasticity. First, if we want to predict the effect of tax reforms, say the introduction of a liveable guaranteed income, it would make a large difference whether the nonlabor income effect is zero or say -0.5, which is the estimate obtained in Golosov et al.
C