the corresponding lifted vector field $\widehat{F}_{\theta,M}:\widetilde{V}\to L(U,\widetilde{V})$ as
\[
\widehat{F}_{\theta,M}(s) := \mathrm{tens}_M(s)\circ F_{\theta,M}(s) \quad\text{for all } s\in\widetilde{V}, \tag{4.2}
\]
with $\mathrm{tens}_M$ defined in Definition 3.2. Note that this definition ensures that $M$-solutions with respect to $\widehat{F}_{\theta,M}$ will not exhibit blow-ups in finite time. However, in order to ensure that the $M$-solutions are solutions in the sense of Theorem 3.9, we consider sufficiently bounded driving signals. The following result shows that we can place a uniform bound on the driving signals to obtain a uniform bound on the solutions and the corresponding Picard iterations. For the remainder of the section, our convention is to index the coordinates of $V=\mathbb{R}^{m+1}$ and $U=\mathbb{R}^{n+1}$ by $[m]_0:=\{0,\dots,m\}$ and $[n]_0:=\{0,\dots,n\}$ respectively.

Proposition 4.1. For all $\theta\in\Theta$ and $M>0$, let $\mathbf{Y}_{\theta,M}$ be the $M$-solution, and $\{\mathbf{Z}_{\theta,M}(r)\}_{r\in\mathbb{Z}_{\ge0}}$ be the sequence of $M$-Picard iterations of the path-dependent RDE $dY_t=\widehat{F}_{\theta,M}(Y_t)\,dX_t$. Define $\mathbf{Y}_{\theta,M}(r):=\pi_{\widetilde{V}}(\mathbf{Z}_{\theta,M}(r))$ and $Y_{\theta,M}(r):=\mathbf{Y}_{\theta,M}(r)^1+\mathbf{1}$ for all $r\in\mathbb{Z}_{\ge0}$. Let $\rho>1$ and
\[
C \ge \max_{1\le i\le\lfloor p\rfloor}\Big(\beta\,(i/p)!\Big)^{p/i}.
\]
There exist functions $\mathcal{N},\mathcal{M}:\mathbb{R}_+\to\mathbb{R}_+$ such that if $d_p(X,0_U)\le\mathcal{N}(\|\theta\|)$, then for all $(s,t)\in\Delta_T$ and $r\in\mathbb{Z}_{\ge0}$, we have
\[
\big\|Y_{\theta,\mathcal{M}(\|\theta\|)}(r)_{s,t}-Y_{\theta,\mathcal{M}(\|\theta\|)}(r+1)_{s,t}\big\| \le \frac{2\rho^{-r}\,C^{1/p}\,d_p(X,0_U)}{\beta\,(1/p)!},\qquad
\big\|Y_{\theta,\mathcal{M}(\|\theta\|)}(r)_{s,t}\big\| < \mathcal{M}(\|\theta\|),
\]
and $\|(Y_{\theta,\mathcal{M}(\|\theta\|)})_{s,t}\|<\mathcal{M}(\|\theta\|)$. The functions $\mathcal{M},\mathcal{N}:\mathbb{R}_+\to\mathbb{R}_+$ depend only on $p$, $\gamma$, $\rho$, $C$ and $V$. Furthermore, $\mathcal{N}$ and $\mathcal{M}$ are non-increasing.

Proof. The proof is given in Appendix B. □

Throughout the remainder of this paper, we fix $\rho>1$ and
\[
C=\max_{1\le i\le\lfloor p\rfloor}\Big(\beta\,(i/p)!\Big)^{p/i}. \tag{4.3}
\]
We assume $X:\mathcal{S}\to G\Omega_p(U)$ is a stochastic process, and we have observed trajectories of the $\mathcal{M}(\|\theta\|)$-solutions of the path-dependent SDE
\[
dY_t=F_{\theta,\mathcal{M}(\|\theta\|)}(Y_t)\,dX_t \tag{4.4}
\]
for some (partially) unknown parameter $\theta\in\Theta$. In other words, samples of the stochastic process $\mathbf{Y}_\theta:=\mathcal{J}_{F_{\theta,\mathcal{M}(\|\theta\|)},\mathcal{M}(\|\theta\|)}\circ X$ have been observed; see Remark 3.11. Our goal is to estimate the parameter $\theta$.
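The geometric decay in Proposition 4.1 also tells us how many Picard iterations suffice for a prescribed accuracy: successive differences are bounded by a constant times $\rho^{-r}$, so the tail sum is geometric. A minimal Python sketch of this bookkeeping, where the illustrative constant $K$ stands in for $2C^{1/p}d_p(X,0_U)/(\beta\,(1/p)!)$ (the function name and default values here are ours, for illustration only):

```python
def picard_iterations_needed(eps, rho=2.0, K=1.0):
    """Smallest r such that the geometric tail of the Picard difference
    bounds K * rho**(-r) is below eps.

    Successive differences ||Y(r) - Y(r+1)|| <= K * rho**(-r) imply, by
    telescoping, ||Y(r) - Y|| <= K * rho**(-r) / (1 - 1/rho).
    """
    r = 0
    while K * rho ** (-r) / (1.0 - 1.0 / rho) >= eps:
        r += 1
    return r

r0 = picard_iterations_needed(1e-6, rho=2.0, K=1.0)
# the tail bound at r0 is indeed below the tolerance
assert 1.0 * 2.0 ** (-r0) / (1 - 0.5) < 1e-6
```

This mirrors the way an explicit $r_0$ is extracted from the same geometric bound in the consistency result of Section 5.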
Moreover, for all $\theta\in\Theta$, we let $\{\mathbf{Z}_\theta(r)\}_{r\in\mathbb{Z}_{\ge0}}$ be the sequence of $\mathcal{M}(\|\theta\|)$-Picard iterations of (4.4), and define $\mathbf{Y}_\theta(r):=\pi_{\widetilde{V}}(\mathbf{Z}_\theta(r))$ and $Y_\theta(r):=\mathbf{Y}_\theta(r)^1+\mathbf{1}$ for all $r\in\mathbb{Z}_{\ge0}$.

5. Expected Signature Matching Method

In this section, we turn to our main problem of interest: estimating the parameters of path-dependent stochastic differential equations. Our methods generalize those of [35], which studies parameter estimation of path-independent rough differential equations using a moment-matching approach; we briefly review that approach first. In [35], the authors consider path-independent stochastic differential equations of the form
\[
dY_t=a_\theta(Y_t)\,dt+b_\theta(Y_t)\circ dW_t, \tag{5.1}
\]
where $W$ is an $n$-dimensional Brownian motion, and $a_\theta:V\to V$ and $b_\theta:V\to L(\mathbb{R}^n,V)$ are path-independent polynomial vector fields parametrized by $\theta$. Suppose we observe $N$ trajectories $\{\mathbf{Y}_{\theta_0}(\sigma_i)\}_{i=1}^N\subset G\Omega_p(V)$, $\sigma_i\in\mathcal{S}$, sampled from the solutions to the SDE above at an unknown parameter $\theta_0\in\Theta$. The aim is to compare the theoretical expected signature $\mathbb{E}[\mathbf{Y}_\theta]$ of the solution of (5.1) at parameter $\theta$ with the empirical expected signature $\frac{1}{N}\sum_{i=1}^N\mathbf{Y}_{\theta_0}(\sigma_i)$ in order to estimate the parameter $\theta_0\in\Theta$. While an explicit form of $\mathbb{E}[\mathbf{Y}_\theta]$ is difficult to obtain in general, [35] uses the expected signature of Picard iterations of (5.1), $\mathbb{E}[\mathbf{Y}_\theta(r)]$, as an approximation, and finds that $\mathbb{E}[\mathbf{Y}_\theta(r)]$ can be expressed as a polynomial in $\theta$ determined by the expected signature of the driving signal $X=(t,W)$. The aim of this section is to generalize the methodology of [35] to estimate the parameters $\theta\in\Theta$ of a signature SDE (4.4), where the vector field is affine in signatures of the solution.
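To see concretely why Picard iterates are polynomial in the parameter, consider the toy path-independent equation $dY=\theta Y\,dt$ with $Y_0=1$: the $r$-th Picard iterate is $\sum_{k\le r}\theta^k t^k/k!$, a polynomial in $\theta$ whose coefficient of $\theta^k$ is $t^k/k!$, precisely the level-$k$ signature of the path $t\mapsto t$. A small sketch (a toy example of ours, not the construction of [35]):

```python
from fractions import Fraction

def picard_coeffs(r):
    """Coefficients c_k of theta^k t^k in the r-th Picard iterate of
    dY = theta * Y dt, Y_0 = 1.

    Picard step: Y_{r+1}(t) = 1 + theta * int_0^t Y_r(u) du.
    With Y_r(t) = sum_k c_k theta^k t^k, integrating t^k gives
    t^{k+1}/(k+1), so the new coefficient of theta^{k+1} t^{k+1}
    is c_k / (k+1).
    """
    c = [Fraction(1)]  # Y_0(t) = 1
    for _ in range(r):
        c = [Fraction(1)] + [ck / (k + 1) for k, ck in enumerate(c)]
    return c

# the r-th iterate is the degree-r Taylor polynomial of exp(theta * t),
# i.e. c_k = 1/k!
assert picard_coeffs(4) == [Fraction(1), Fraction(1), Fraction(1, 2),
                            Fraction(1, 6), Fraction(1, 24)]
```

The path-dependent analogue developed below replaces the coefficients $t^k/k!$ by iterated integrals $X^J_{0,t}$ of the driving rough path, indexed by words $J$.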
https://arxiv.org/abs/2505.22646v1
We begin by showing that in this setting the theoretical expectation of the $r$th Picard iteration can also be expressed as a polynomial $P_r$ in $\theta$ determined by the expected signature $\mathbb{E}[\mathbf{X}]$ of the driving signal; see Remark 5.5. Then, we show that our estimator is consistent in Theorem 5.11. We note that the expected signature of the solution of the SDE is uniquely determined by its distribution. Therefore, estimating the unknown parameters of the SDE by matching the theoretical and empirical expected signatures of the solution identifies parameters up to their equivalence class under the distribution they induce on the solution. We show in Section 7 that distinct parameter sets may yield the same solution, and hence the same expected signature of the solution, for sufficiently bounded trajectories of the driving signal.

Remark 5.1. While our vector fields are affine with respect to the signature $\mathbf{Y}$ of the solution, the setting in this paper generalizes the path-independent polynomial vector fields of [35] by using the shuffle product. For example, we have $Y^{(1,1)}_{0,t}=\frac{1}{2}\big(Y^{(1)}_t\big)^2$ for all $t\in T$; see Experiment 6.2. In addition, the polynomial vector fields considered in [35] are not Lip$(\gamma)$ a priori, and therefore, the authors believe a modification of these vector fields, similar to what is suggested in this article, is necessary to guarantee the existence and uniqueness of the solutions. Furthermore, the authors believe there is an error in [35, Equation 3.19] in the proof of consistency of the estimator in the path-independent setting [35, Theorem 3.6]. Their proof may be rectified by replacing their use of the Jacobian of $P_r$ with a matrix of derivatives of $P_r$, where each row is evaluated at a different point, assuming invertibility everywhere of such matrices, and an even stronger bound on the norm of their inverses.
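The shuffle relation $Y^{(1,1)}_{0,t}=\frac{1}{2}(Y^{(1)}_t)^2$ quoted in Remark 5.1 can be verified numerically on any piecewise-linear path. The following sketch (our illustration, not code from the paper) computes the level-1 and $(1,1)$ iterated integrals of a one-dimensional path exactly, segment by segment:

```python
def sig_1_and_11(increments):
    """Level-1 and (1,1) iterated integrals of a 1-D piecewise-linear
    path given by its increments:
        S1  = int dx,
        S11 = int (x_u - x_0) dx_u.
    Over a linear segment with increment d starting at running value s,
    int (x - x_0) dx = s*d + d**2/2 (exact for piecewise-linear paths).
    """
    s1, s11 = 0.0, 0.0
    for d in increments:
        s11 += s1 * d + d * d / 2.0
        s1 += d
    return s1, s11

s1, s11 = sig_1_and_11([0.7, -0.2, 1.5, 0.3])
# the shuffle identity at level 2: S^{(1,1)} = (S^{(1)})^2 / 2
assert abs(s11 - 0.5 * s1 ** 2) < 1e-12
```

Each segment contributes $s\,d + d^2/2 = \frac12((s+d)^2-s^2)$, so the sum telescopes to $\frac12(S^{(1)})^2$, which is exactly the shuffle identity in this one-dimensional case.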
However, we believe our proof of consistency in Theorem 5.11 can be adapted to their setting, and it requires significantly fewer assumptions, as discussed in the introduction.

5.1. Picard Iterations as Polynomials of Parameters. In order to approximate the expected signature of the solution to (4.4) with a polynomial expression in the parameters, we begin by studying the Picard iterations for a given $\mathbf{X}\in G\Omega_p(U)$. For $r\in\mathbb{Z}_{\ge0}$, our aim is to express the path $Y_\theta(r)$ as a system of polynomials in $\theta$, where the coefficients are given by the signature of $\mathbf{X}$.

Notation 5.2. Let $\mathcal{A}\subset\mathbb{Z}_{\ge0}$, and let $I$ be a multi-index or word in $\mathcal{A}$, i.e. $I=(i_1,\dots,i_\ell)\in\mathcal{A}^\ell$ for some $\ell\in\mathbb{Z}_{\ge0}$. We let $|I|:=\ell$ denote the length of the word. If $|I|\ge1$, we write $I=(I^-,I_f)$, where $I^-=(i_1,\dots,i_{\ell-1})$ is the word consisting of the first $\ell-1$ elements of $I$, and $I_f=(i_\ell)$ is the word of length 1 consisting of the final element. For a word $I=(i_1,\dots,i_\ell)$ in $\mathcal{A}$ and a word $J=(j_1,\dots,j_{\ell'})$ in $\{1,\dots,\ell\}$, we define $I_J:=(i_{j_1},\dots,i_{j_{\ell'}})$. Note that the word in $\mathcal{A}$ of length 0 is denoted by $\emptyset$. For a set
$\mathcal{A}$, we denote the set of words of length at most $q$ in $\mathcal{A}$ by $W(\mathcal{A},q):=\{(i_1,\dots,i_\ell)\in\mathcal{A}^\ell:\ell\le q\}$.

Example 5.3. $I=(4,0,1)$ is a word of length 3 in $\{0,1,2,3,4\}$. In this case $I^-=(4,0)$ and $I_f=(1)$. Moreover, if $J=(1,3)$, then $I_J=(4,1)$.

Theorem 5.4. For all $r,\ell\in\mathbb{Z}_{\ge0}$, define
\[
Q(r,\ell):=\begin{cases}0 & \text{if } r=0 \text{ or } \ell=0,\\ 2^{r-1} & \text{if } r\ge1 \text{ and } \ell=1,\\ 2^{r}-1 & \text{if } r\ge1 \text{ and } \ell>1.\end{cases} \tag{5.2}
\]
If $\theta\in\Theta$ and $\mathbf{X}\in G\Omega_p(U)$ are such that $d_p(\mathbf{X},0)<\mathcal{N}(\theta)$, then for any $r\in\mathbb{Z}_{\ge0}$, any word $I\in W([m]_0,q)$, and any $t\in[0,T]$,
\[
Y_\theta(r)^I_{0,t}=\sum_{J\in W([n]_0,Q(r,|I|))}\alpha^I_{r,J}(\theta)\,X^J_{0,t}, \tag{5.3}
\]
where
• for all $r\ge0$, $\alpha^\emptyset_{r,\emptyset}(\theta)=1$;
• for all $I\in([m]_0)^\ell$ with $1\le\ell\le q$, we have $\alpha^I_{0,\emptyset}(\theta)=0$;
• for all $r\ge1$, all $i\in[m]_0$, and all words $J\in W([n]_0,Q(r,1))$,
\[
\alpha^{(i)}_{r,J}(\theta)=\begin{cases}\displaystyle\sum_{\substack{K\in([m]_0)^\ell,\\ |J^-|\le Q(r-1,\ell),\ \ell\le q}}\theta^K_{i,J_f}\,\alpha^K_{r-1,J^-}(\theta) & \text{if } 1\le|J|\le Q(r-1,q)+1,\\ 0 & \text{otherwise;}\end{cases} \tag{5.4}
\]
• and for all $r\ge1$, all words $I\in([m]_0)^\ell$ with $2\le\ell\le q$, and all words $J\in W([n]_0,Q(r,|I|))$,
\[
\alpha^I_{r,J}(\theta)=\begin{cases}\displaystyle\sum_{\substack{(L,K):\ (1,\dots,|J|-1)\in L\,⧢\,K,\\ |L|\le Q(r-1,|I|-1),\ |K|\le Q(r,1)-1}}\alpha^{I^-}_{r-1,J_L}(\theta)\,\alpha^{I_f}_{r,(J_K,J_f)}(\theta) & \text{if } 1\le|J|\le Q(r-1,|I|-1)+Q(r,1),\\ 0 & \text{otherwise,}\end{cases} \tag{5.5}
\]
where $⧢$ denotes the shuffle product of words, and with
\[
\theta^I_{0,j}:=\begin{cases}1 & \text{if } j=0 \text{ and } I=\emptyset,\\ 0 & \text{otherwise}\end{cases}
\]
for all $j\in[n]_0$ and all $I\in W([m]_0,q)$. Moreover, for all $r\in\mathbb{Z}_{\ge0}$ and all words $I,J$, the function $\theta\mapsto\alpha^I_{r,J}(\theta)$ is a polynomial function of degree at most $Q(r,|I|)$.

Proof. We first prove (5.3) for bounded 1-variation paths, where $p=1$. Note that in this case, the first and second bullet points are immediate. These bullet points prove (5.3) for the cases of $r=0$, as well as $r\ge1$ and $I=\emptyset$. We prove the remaining cases of (5.3) by induction. We assume that for some $r\in\mathbb{N}$, (5.3) holds for all $(r',I')$ with $r'<r$ and $I'\in W([m]_0,q)$. We first prove the relations in (5.4) for Picard iteration $r$ and a length-1 word $(i)$. Note that since $d_p(\mathbf{X},0)\le\mathcal{N}(\theta)$, by Proposition 4.1 we have $\|\mathbf{Y}_\theta(r)_{s,t}\|<\mathcal{M}(\|\theta\|)$ for all $r\ge0$ and all $(s,t)\in\Delta_T$. Thus, without loss of generality, we will omit $\mathcal{M}(\|\theta\|)$ from the notation for the vector fields, since $F_\theta(s)=F_{\theta,\mathcal{M}(\|\theta\|)}(s)$ when $\|s\|<\mathcal{M}(\|\theta\|)$.
So,
\[
\begin{aligned}
Y_\theta(r)^{(i)}_{0,t}&=\int_0^t\sum_{j=0}^n\big[F_\theta(\mathbf{Y}_\theta(r-1)_{0,u})\big]_{i,j}\,dX^{(j)}_{0,u}
=\int_0^t\sum_{j=0}^n\big\langle\theta_{i,j},\mathbf{Y}_\theta(r-1)_{0,u}\big\rangle\,dX^{(j)}_{0,u}\\
&=\int_0^t\sum_{j=0}^n\sum_{K\in W([m]_0,q)}\theta^K_{i,j}\,\mathbf{Y}_\theta(r-1)^K_{0,u}\,dX^{(j)}_{0,u}\\
&=\sum_{j=0}^n\sum_{K\in W([m]_0,q)}\theta^K_{i,j}\sum_{\tilde{J}\in W([n]_0,Q(r-1,|K|))}\alpha^K_{r-1,\tilde{J}}(\theta)\int_0^t X^{\tilde{J}}_{0,u}\,dX^{(j)}_{0,u}\\
&=\sum_{j=0}^n\sum_{K\in W([m]_0,q)}\sum_{\tilde{J}\in W([n]_0,Q(r-1,|K|))}\theta^K_{i,j}\,\alpha^K_{r-1,\tilde{J}}(\theta)\,X^{(\tilde{J},j)}_{0,t}.
\end{aligned}
\]
This proves (5.4), and hence proves (5.3) for $(r,(i))$. Now we consider the relations in (5.5), which will be proved by induction on the length of the word $I$. Let $|I|=\ell$ with $2\le\ell\le q$. We assume that (5.3) additionally holds at Picard iteration $r$ for words $I'$ with $|I'|<\ell$. Then,
\[
\begin{aligned}
Y_\theta(r)^I_{0,t}&=\int_0^t Y_\theta(r-1)^{I^-}_{0,u}\cdot\big(F_\theta(\mathbf{Y}_\theta(r-1)_{0,u})\,dX_{0,u}\big)^{I_f}
=\int_0^t Y_\theta(r-1)^{I^-}_{0,u}\,dY_\theta(r)^{I_f}_{0,u}\\
&=\int_0^t\Bigg(\sum_{L\in W([n]_0,Q(r-1,|I|-1))}\alpha^{I^-}_{r-1,L}(\theta)\,X^L_{0,u}\Bigg)\Bigg(\sum_{\substack{K\in W([n]_0,Q(r,1)),\\ |K|\ge1}}\alpha^{I_f}_{r,K}(\theta)\,dX^K_{0,u}\Bigg)\\
&=\int_0^t\sum_{L}\sum_{K}\alpha^{I^-}_{r-1,L}(\theta)\,\alpha^{I_f}_{r,K}(\theta)\,X^L_{0,u}\,X^{K^-}_{0,u}\,dX^{K_f}_{0,u}\\
&=\sum_{L}\sum_{K}\sum_{\tilde{J}\in L\,⧢\,K^-}\alpha^{I^-}_{r-1,L}(\theta)\,\alpha^{I_f}_{r,K}(\theta)\int_0^t X^{\tilde{J}}_{0,u}\,dX^{K_f}_{0,u}\\
&=\sum_{L}\sum_{K}\sum_{\tilde{J}\in L\,⧢\,K^-}\alpha^{I^-}_{r-1,L}(\theta)\,\alpha^{I_f}_{r,K}(\theta)\,X^{(\tilde{J},K_f)}_{0,t},
\end{aligned}
\]
where the inner sums range over the same index sets as in the second line. This proves (5.5), and hence proves (5.3) for $(r,I)$. This concludes the proof of (5.3) in the case of $p=1$. Now suppose that $p>1$. Because $\mathbf{X}\in G\Omega_p(U)$, there
exists a sequence $\{\mathbf{X}(k)\}_{k\in\mathbb{N}}$ of extensions of bounded 1-variation paths such that $d_p(\mathbf{X}(k),\mathbf{X})\to0$ as $k\to\infty$. For all $k\in\mathbb{N}$, let $\{\mathbf{Z}_\theta(k,r)\}_{r\in\mathbb{Z}_{\ge0}}$ be the sequence of $\mathcal{M}(\|\theta\|)$-Picard iterations of the path-dependent differential equation $dY_t=\widehat{F}_{\theta,\mathcal{M}(\|\theta\|)}(Y_t)\,dX(k)_t$, and define $\mathbf{Y}_\theta(k,r):=\pi_{\widetilde{V}}(\mathbf{Z}_\theta(k,r))$ and $Y_\theta(k,r):=\mathbf{Y}_\theta(k,r)^1+\mathbf{1}$ for all $r\in\mathbb{Z}_{\ge0}$. Because $d_p(\mathbf{X},0)<\mathcal{N}(\theta)$, there exists $N_0\in\mathbb{N}$ such that for all $k\ge N_0$, we have $d_p(\mathbf{X}(k),0)<\mathcal{N}(\theta)$. Therefore, since (5.3) has already been proved in the case $p=1$, we have
\[
Y_\theta(k,r)^I_{0,t}=\sum_{J\in W([n]_0,Q(r,|I|))}\alpha^I_{r,J}(\theta)\,X(k)^J_{0,t} \tag{5.6}
\]
for all $k\ge N_0$, all $r\in\mathbb{Z}_{\ge0}$, all words $I\in W([m]_0,q)$, and all $t\in[0,T]$. Because the maps $\mathbf{Z}\mapsto\int h_{\mathcal{M}(\|\theta\|)}(\mathbf{Z})\,d\mathbf{Z}$ and $\mathbf{Z}\mapsto\pi_{\widetilde{V}}(\mathbf{Z})$ are both continuous on $G\Omega_p(U\oplus\widetilde{V})$, we have $\lim_{k\to\infty}d_p(\mathbf{Y}_\theta(k,r),\mathbf{Y}_\theta(r))=0$, and so $\lim_{k\to\infty}Y_\theta(k,r)^I_{0,t}=Y_\theta(r)^I_{0,t}$ for all $r\in\mathbb{Z}_{\ge0}$, all $I\in W([m]_0,q)$, and all $t\in[0,T]$. Hence, by taking limits of both sides of (5.6) as $k\to\infty$, the proof of (5.3) is concluded in the case $p>1$. Note that the four bullet points recursively define $\alpha^I_{r,J}$ for all $r\ge0$, all words $I\in W([m]_0,q)$, and all words $J\in W([n]_0,Q(r,|I|))$. This recursive definition confirms that $\theta\mapsto\alpha^I_{r,J}(\theta)$ is a polynomial of degree at most $Q(r,|I|)$. □

Remark 5.5. Because our paths $Y_\theta(r)$ are the level-1 components of the Picard iterations $\mathbf{Y}_\theta(r)$ of the lifted equation (see Theorem 3.9), they differ from the Picard iterations used in [35]. As a result, for any $I\in W([m]_0,q)$ and any $r\in\mathbb{Z}_{\ge0}$ in (5.3), we obtain smaller polynomials, both in terms of the maximum degrees of the polynomials $\alpha^I_{r,J}$ and the maximum length of the words $J$ appearing on the right-hand side of (5.3). In [35], these quantities are $|I|q^r$ and $|I|\sum_{i=0}^{r-1}q^i$ respectively, where $q$ is the maximum degree of $a_\theta$ and $b_\theta$ in (5.1). In our case, they are both equal to $Q(r,|I|)=O(2^r)$, as defined in (5.2).
Note that smaller polynomials improve the overall efficiency of the estimation method, and while $Q(r,|I|)$ grows exponentially with $r$, we prove in Theorem 5.11 that the rate of convergence of our estimator is also exponential in $r$.

5.2. Empirical Estimator. In the previous section, we considered the path-dependent differential equation (4.4) with a given $\theta\in\Theta$ for a fixed sample of the driving rough path $\mathbf{X}\in G\Omega_p(U)$ such that $d_p(\mathbf{X},0)<\mathcal{N}(\|\theta\|)$. In particular, we obtained polynomial expressions in $\theta$ for the Picard iterations whenever the driving signal is sufficiently bounded. In this section, we consider the case where some components of the true parameter $\theta_0\in\Theta$, which determines the vector field, are unknown, and the differential equation is driven by a stochastic rough path $X$. We introduce the Expected Signature Matching Method for estimating the unknown components from the observed trajectories of the solution.

First, we fix a decomposition of the parameter space into known and unknown components, $\Theta=\Theta^{\mathrm{kn}}\oplus\Theta^{\mathrm{unkn}}$, where $\Theta^{\mathrm{kn}}$ is the subspace of known parameters and $\Theta^{\mathrm{unkn}}$ is the subspace of unknown parameters. We note that in general, $\dim\Theta=m(n+1)\dim\widetilde{V}>\dim\widetilde{V}$, so the system of polynomials in $\theta\in\Theta$ defined by the collection of all words $I\in W([m]_0,q)$ would yield an underdetermined system. Thus, we make the assumption that for the unknown subspace, $d:=\dim\Theta^{\mathrm{unkn}}\le\dim\widetilde{V}$. We will consider a vector field parametrized by the true parameter $\theta_0=(\theta^{\mathrm{kn}}$
$_0,\theta_0)\in\Theta^{\mathrm{kn}}\oplus\Theta^{\mathrm{unkn}}$. Thus, the known true parameter $\theta^{\mathrm{kn}}_0\in\Theta^{\mathrm{kn}}$ will always be fixed. We will use the notation $\theta=(\theta^{\mathrm{kn}}_0,\theta)\in\Theta$ to denote the full parameter set for some $\theta\in\Theta^{\mathrm{unkn}}$. Now pick $\xi\in\mathbb{R}_+$, and define
\[
\Theta_\xi:=\{\theta\in\Theta^{\mathrm{unkn}}:\mathcal{N}(\|\theta\|)\ge\xi\}\quad\text{and}\quad E_\xi:=\{\sigma\in\mathcal{S}:d_p(\mathbf{X}(\sigma),0)<\xi\}.
\]
Note that for every $\theta\in\Theta_\xi$ and $\sigma\in E_\xi$, we have $d_p(\mathbf{X}(\sigma),0)<\xi\le\mathcal{N}(\|\theta\|)$, and thus, by Theorem 5.4, for all $r\in\mathbb{Z}_{\ge0}$ and all words $I\in W([m]_0,q)$, there exists a polynomial function $\theta\mapsto P^I_r(\theta)$ such that for all $\theta\in\Theta_\xi$, we have
\[
P^I_r(\theta)=\mathbb{E}\big[Y_\theta(r)^I_{0,T}\cdot\chi_{E_\xi}\big]. \tag{5.7}
\]
Note that the coefficients of $P^I_r$ are determined by the expected signatures $\mathbb{E}\big[X^J\cdot\chi_{E_\xi}\big]$ for words $J$ in a (possibly proper) subset of $W([n]_0,Q(r,|I|))$. Suppose we observe $N$ trajectories $\{Y_{\theta_0}(\sigma_i)\}_{i=1}^N$, $\sigma_i\in\mathcal{S}$, of the $\mathcal{M}(\|\theta_0\|)$-solution to the path-dependent SDE $dY_t=F_{\theta_0,\mathcal{M}(\|\theta_0\|)}(Y_t)\,dX_t$. Since the dimension of the space of unknown parameters is $\dim\Theta^{\mathrm{unkn}}=d$, we choose a set of words $\{I_1,\dots,I_d\}\subset W([m]_0,q)$. The Expected Signature Matching Method (ESMM) finds an estimate for $\theta_0$ by solving the polynomial system of equations
\[
\bigg\{P^{I_k}_r(\theta)=\frac{1}{N}\sum_{i=1}^N\mathbf{Y}_{\theta_0}(\sigma_i)^{I_k}_{0,T}\cdot\chi_{E_\xi}(\sigma_i),\quad k=1,\dots,d\bigg\} \tag{5.8}
\]
for some $r\in\mathbb{N}$.

Remark 5.6. To ensure the consistency of the ESMM, we need to pick $\xi$ so that $\theta_0\in\Theta^\circ_\xi$. To do so, we fix $\mu>0$ such that $\|\theta_0\|<\mu$, and set $\xi:=\mathcal{N}(\mu)$. Note that larger values of $\|\theta_0\|$ require larger values of $\mu$, which in turn lead to smaller values of $\xi$, thereby shrinking the set $E_\xi$.

5.3. Consistency. This section justifies the use of the ESMM by proving its consistency under certain constraints on the function $P:\Theta_\xi\to\mathbb{R}^d$ defined by
\[
\theta\mapsto\big(P^{I_1}(\theta),\dots,P^{I_d}(\theta)\big), \tag{5.9}
\]
where for any word $I\in W([m]_0,q)$, the function $P^I:\Theta_\xi\to\mathbb{R}$ is defined by $P^I(\theta):=\mathbb{E}\big[(Y_\theta)^I_{0,T}\cdot\chi_{E_\xi}\big]$. In particular, we will prove in Theorem 5.11 that when $P$ is differentiable at $\theta_0$ with an invertible Jacobian, the ESMM is consistent: for sufficiently high Picard iterations $r\in\mathbb{N}$ and sufficiently many samples $N$, the system (5.8) admits a solution arbitrarily close to $\theta_0$ almost surely.
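In the simplest case $d=1$, solving a system like (5.8) is scalar root-finding, and the Miranda-type sign condition used in the consistency analysis reduces to the intermediate value theorem. A toy sketch (the polynomial and the noiseless target below are illustrative inventions of ours, not an actual expected-signature polynomial):

```python
def match_moment(P, target, lo, hi, tol=1e-10):
    """Solve P(theta) = target by bisection on [lo, hi], assuming
    P(lo) - target and P(hi) - target have opposite signs (the
    one-dimensional Miranda / intermediate-value condition)."""
    f = lambda t: P(t) - target
    assert f(lo) * f(hi) <= 0, "sign condition fails on [lo, hi]"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# toy "moment polynomial" P(theta) = theta**3 + theta, matched against a
# target generated at theta_0 = 1.5 (zero sampling noise)
P = lambda t: t ** 3 + t
theta_hat = match_moment(P, P(1.5), 0.0, 4.0)
assert abs(theta_hat - 1.5) < 1e-8
```

In the multivariate case $d>1$ the same sign conditions on opposite faces of a domain are exactly the Miranda conditions of Definition 5.8, and the polynomial system is solved by algebraic methods rather than bisection.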
We begin by showing that the expected signature of the solution is continuous in $\theta$.

Lemma 5.7. For any $I\in W([m]_0,q)$, the sequence of functions $\{P^I_r\}_{r\in\mathbb{Z}_{\ge0}}$, defined in (5.7), converges uniformly on $\Theta_\xi$ to the function $P^I$, defined in (5.9), as $r\to\infty$. Hence, $P^I$ is continuous on $\Theta_\xi$.

Proof. The proof is given in Appendix B. □

We now move on to the main results on consistency. The primary tool in our proof is a variant [30] of Miranda's theorem, which is a generalization of the classical intermediate value theorem. We begin with several required definitions.

Definition 5.8. [30, Definition 2.1] For all $\delta>0$, define $Q(\delta):=[-\delta,\delta]^d\subset\mathbb{R}^d$ and
\[
Q_k(\delta)^\pm:=\{(x_1,\dots,x_d)\in Q(\delta):x_k=\pm\delta\}
\]
for all $k\in[d]$.
• A set $D\subset\mathbb{R}^d$ is called a Miranda domain if there exists a surjective continuous map $g:Q(1)\to D$ such that $g(\partial Q(1))=\partial D$, where $\partial$ denotes the boundary of a set. In this case, $g$ is called a Miranda mapping for $D$.
• For a Miranda domain $D$, let $\mathcal{D}=\{D^+_1,D^-_1,\dots,D^+_d,D^-_d\}$ be a set of subsets of $\partial D$. The set $\mathcal{D}$ is called a Miranda partition of $\partial D$ if there exists a
Miranda mapping $g:Q(1)\to D$ such that for all $k\in[d]$, $g(Q_k(1)^+)=D^+_k$ and $g(Q_k(1)^-)=D^-_k$. To simplify terminology, we will often call the pair $(D,\mathcal{D})$ a Miranda domain.
• For a Miranda domain $(D,\mathcal{D})$, a continuous mapping $f=(f_1,\dots,f_d):D\to\mathbb{R}^d$ is said to satisfy the Miranda conditions on $(D,\mathcal{D})$ if $f_k(\theta_1)f_k(\theta_2)\le0$ for all $\theta_1\in D^+_k$, all $\theta_2\in D^-_k$, and all $k\in[d]$.

We will begin by showing consistency when we assume the existence of Miranda domains and partitions such that the corresponding Miranda conditions hold for $P-P(\theta_0)$ with strict inequalities. In particular, we show that for sufficiently large $r$ and $N$, the Miranda conditions also hold for the error of the polynomial system in (5.8) on the same Miranda domains.

Lemma 5.9. Suppose $(D,\mathcal{D})$ is a Miranda domain such that $D\subset\Theta_\xi$, and
\[
\big(P^{I_k}(\theta_1)-P^{I_k}(\theta_0)\big)\big(P^{I_k}(\theta_2)-P^{I_k}(\theta_0)\big)<0\quad\text{for all }\theta_1\in D^+_k,\ \theta_2\in D^-_k,\ k\in[d]. \tag{5.10}
\]
Then almost surely, there exist $N_0,r_0\in\mathbb{N}$ such that for all $N\ge N_0$ and all $r\ge r_0$, the polynomial system (5.8) has a solution $\theta_{r,N}$ in $D$.

Proof. The proof is given in Appendix B. □

Next, we show that if $P$ is differentiable at $\theta_0$ with an invertible Jacobian, then arbitrarily small neighborhoods of $\theta_0$ contain Miranda domains with the properties described in Lemma 5.9.

Lemma 5.10. Suppose $P$ is differentiable at $\theta_0$ with an invertible Jacobian. Then there exists $\varepsilon_0>0$ such that for all $\varepsilon\in(0,\varepsilon_0]$, there exists a Miranda domain $(D,\mathcal{D})$ where $D\subset B_\varepsilon(\theta_0)$ and (5.10) holds. The constant $\varepsilon_0$ depends only on the choice of words $\{I_1,\dots,I_d\}$, the expected signature of the driving signal $X$, and the point $\theta_0$.

Proof. The proof is given in Appendix B. □

Now, by putting together Lemma 5.9 and Lemma 5.10, in addition to the explicit rates from (B.8), we obtain our main consistency result.

Theorem 5.11. Suppose $P$ is differentiable at $\theta_0\in\Theta_\xi$ with an invertible Jacobian at this point.
Then almost surely, for all $\varepsilon>0$, there exist $N_0,r_0\in\mathbb{N}$ such that for all $N\ge N_0$ and all $r\ge r_0$, the system (5.8) has a solution $\theta_{r,N}\in B_\varepsilon(\theta_0)$. Furthermore, there exists a constant $\varepsilon_0>0$ such that for all $\varepsilon\in(0,\varepsilon_0]$, we can explicitly express $r_0$ as the smallest positive integer satisfying
\[
\rho^{-r_0}<\frac{(1-\rho^{-1})\,\beta\,(1/p)!}{16\,C^{1/p}\,\xi\,\|(J_P(\theta_0))^{-1}\|_\infty}\,\varepsilon, \tag{5.11}
\]
where the constants $\rho$ and $C$ are those from (4.3). The constant $\varepsilon_0$ depends only on the choice of words $\{I_1,\dots,I_d\}$, the expected signature of the driving signal $X$, and the point $\theta_0$.

Proof. By Lemma 5.10, for all $\varepsilon>0$, there exists a Miranda domain $(D_\varepsilon,\mathcal{D}_\varepsilon)$ such that $D_\varepsilon\subset B_\varepsilon(\theta_0)$ and (5.10) holds. Then, by applying Lemma 5.9, we conclude the first part of the theorem. To obtain an explicit expression for $r_0$, let $\varepsilon_0$ be defined as in Lemma 5.10 and consider $\varepsilon\in(0,\varepsilon_0]$. Moreover, let $\eta_k$ be defined as in (B.9) for all $k\in[d]$. By the discussion in the proof of Lemma 5.9, we require $r_0\in\mathbb{N}$ such that for all $r\ge r_0$,
\[
\sup_{\theta\in\Theta_\xi}\big|P^{I_k}_r(\theta)-P^{I_k}(\theta)\big|<\eta_k\quad\text{for all } k\in[d].
\]
By (B.8), this occurs when
\[
\sup_{\theta\in\Theta_\xi}\big|P^{I_k}_r(\theta)-P^{I_k}(\theta)\big|\le\frac{2\rho^{-r}}{(1-\rho^{-1})\,\beta\,(1/p)!}\,C^{1/p}\,\xi<\eta_k\quad\text{for all } k\in[d].
\]
Using the definition of $\eta_k$ in (B.9) together with (B.11) and (B.12), we
obtain $\eta_k>\frac{1}{4}\delta$ for all $k\in[d]$, where $\delta$ is defined in (B.10). Therefore, a sufficient condition for $r_0$ is
\[
\frac{2\rho^{-r_0}}{(1-\rho^{-1})\,\beta\,(1/p)!}\,C^{1/p}\,\xi<\frac{1}{4}\delta=\frac{\varepsilon}{8\,\|(J_P(\theta_0))^{-1}\|_\infty},
\]
and thus we obtain the desired expression in (5.11). □

We conclude this section by proving that the function $P$ is, in fact, differentiable almost everywhere.

Proposition 5.12. The function $P:\Theta_\xi\to\mathbb{R}^d$ defined in (5.9) is locally Lipschitz. Therefore, $P$ is differentiable almost everywhere on $\Theta^\circ_\xi$.

Proof. The proof is given in Appendix B. □

6. Experiments

In this section, we evaluate the performance of the Expected Signature Matching Method in estimating the unknown parameters of an underlying path-dependent SDE from a number of its observed trajectories. The code, generated data, and experimental results associated with this section are available at our GitHub repository5. In all the experiments in this section, we set $N:=2000$, $r:=3$, and $q:=3$. To perform each experiment, we first select the parameters $m$, $n$, $d$, $\delta t$, $T$, several $d$-subsets of the words $W([m]_0,q)$, as well as a linear path-dependent SDE
\[
dY_t=F_{\theta_0}(Y_t)\,dX_t, \tag{6.1}
\]
where $\theta_0=(\theta^{\mathrm{kn}}_0,\theta_0)\in\Theta^{\mathrm{kn}}\oplus\Theta^{\mathrm{unkn}}=\Theta=\mathrm{Mat}_{m,n+1}(\widetilde{V})$ with $\dim\Theta^{\mathrm{unkn}}=d$ denotes the parameters, $X=(t,W)$ is the driving noise, and $W$ is an $n$-dimensional Brownian motion. We then repeat the following procedure 100 times:
(1) Generate $N$ trajectories of the solution to the SDE in (6.1).
(2) Apply the ESMM corresponding to each selected set of words to the generated trajectories in order to estimate $\theta_0$.
To simulate trajectories of (6.1), we consider the lifted version $\widehat{F}_{\theta_0}$ of the vector field $F_{\theta_0}$, and then use the Diffrax package [22] in Python with step size $\delta t$ and the Heun Stratonovich solver to simulate solution trajectories of the path-independent SDE equivalent to (6.1), i.e.
\[
dY_t=\widehat{F}_{\theta_0}(Y_t)\,dX_t,\quad\text{with } Y_0=\mathbf{1}\text{ and }\widehat{F}_{\theta_0}(Y_t)=\mathrm{tens}(Y_t)\circ F_{\theta_0}(Y_t), \tag{6.2}
\]
over the time interval $[0,T]$. The underlying paths in the solution trajectories of the path-independent SDE (6.2) are $\widetilde{V}$-valued.
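The Heun solver mentioned above is a two-stage predictor-corrector rule. The following pure-Python sketch is our simplification of the scheme (not the Diffrax implementation itself), written for a generic driving increment and checked here only on the deterministic part $dy=y\,dt$:

```python
import math

def heun_path(f, y0, increments):
    """Heun (explicit trapezoidal) integration of dy = f(y) dx along a
    driving path given by its increments dx_n:
        predictor  y* = y + f(y) dx
        corrector  y <- y + (f(y) + f(y*)) dx / 2
    For Stratonovich SDEs the same two-stage rule is applied with
    Brownian increments as dx."""
    y = y0
    for dx in increments:
        y_pred = y + f(y) * dx
        y = y + 0.5 * (f(y) + f(y_pred)) * dx
    return y

# sanity check on dy = y dt over [0, 1]: the result should approximate e
n = 1000
y = heun_path(lambda y: y, 1.0, [1.0 / n] * n)
assert abs(y - math.e) < 1e-5
```

The corrector stage is what makes the scheme converge to the Stratonovich (rather than Itô) solution when the increments come from Brownian motion.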
We take the projection of each of these paths onto $V$ to extract the underlying paths in the solution trajectories of the path-dependent SDE (6.1), denoted by $\{Y_{\theta_0}(\sigma_i)\}_{i\in[N]}$. Then, we use the iisignature package [39] in Python to obtain the signatures up to level $q$ of these paths, which we denote by $\{\mathbf{Y}_{\theta_0}(\sigma_i)\}_{i\in[N]}$. Note that although the solutions to (6.2) involve these signatures, we choose to approximate them from their underlying paths $\{Y_{\theta_0}(\sigma_i)\}_{i\in[N]}$, as these paths are typically what a user of the ESMM would observe in practice. Now, to estimate $\theta_0$, we use the Macaulay2 [17] package NumericalAlgebraicGeometry [25, 26] to solve the polynomial system
\[
\bigg\{P^{I_k}_r(\theta)=\frac{1}{N}\sum_{i=1}^N\mathbf{Y}_{\theta_0}(\sigma_i)^{I_k}_{0,T},\quad k=1,\dots,d\bigg\} \tag{6.3}
\]
for each selected set of words $\{I_1,\dots,I_d\}\subset W([m]_0,q)$. We report all the real solutions of this polynomial system. Furthermore, we let
\[
\hat{\theta}:=\arg\min\big\{\|\theta-\theta_0\|_1:\theta\text{ is a real solution to the polynomial system (6.3)}\big\}, \tag{6.4}
\]
and report the mean and standard deviation of the obtained estimates $\hat{\theta}$ across all 100 trials.

5 https://github.com/pardis-semnani/signature-SDE-parameter-estimation

Remark 6.1. Since $\lim_{T\to0}\mathbb{P}\big(d_p(\mathbf{X}|_{\Delta_T},0)<\xi\big)=1$, for small values of $T$ we consider the simplifying assumption in the polynomial
system (6.3), where we omit the term $\chi_{E_\xi}(\sigma_i)$ (when compared with (5.8)). This also allows us to use $\mathbb{E}[X^J]$ in the coefficients of $P^{I_k}_r$, which is computed using the explicit formula in [24, Theorem 1]. Our experiments show that even with this assumption, our method is able to effectively estimate parameters.

Experiment 6.2. In this experiment, we set $m=n=1$, and use step size $\delta t=0.001$ over the interval $[0,T]$ with $T=0.2$. The SDE model in this experiment contains $d=3$ unknown parameters $(\theta_1,\theta_2,\theta_3)$, and is given by
\[
dY^{(0)}_t=dt,\qquad dY^{(1)}_t=-\theta_1\big(Y^\emptyset_t-Y^{(1)}_t\big)\,dt+\big(\theta_2 Y^\emptyset_t+\theta_3 Y^{(1,1)}_t\big)\circ dW^{(1)}_t,
\]
where we recall that $Y^\emptyset_t=1$. The true parameter is $\theta_0=(-1,0,4)$, and Table 1 shows the component-wise means and standard deviations of the estimates $\hat{\theta}$, defined in (6.4), obtained over the 100 trials via the ESMM with the sets of words
\[
W_1=\{(0,1,0),(0,1,1),(1,0,1)\},\qquad W_2=\{(1),(1,1),(0,1,1)\}. \tag{6.5}
\]
Figure 2 in Appendix C illustrates all the real solutions to the polynomial system (6.3) in each trial and for each set of words. Our results show that for both sets of words, the ESMM effectively estimates the correct parameters, with slightly more error for the second-order signature term in the diffusion. This experiment allows for a comparison between our version of the ESMM and the original version in [35]. In [35, Example 5.1], parameter estimation is done for the same SDE model and the same true parameter values, where two parameters $(\theta_1,\theta_3)$ are considered unknown. Estimates from a single trial using $N=2000$, $r=3$, $T=0.25$ and the word set $\{(1),(1,1)\}$ are reported, and they are comparable to our results.

                  θ1        θ2        θ3
  W1  mean     -1.0135   -0.1117    4.2444
      std dev   0.0017    0.0014    0.1820
  W2  mean     -0.9956    0.0413    4.5703
      std dev   0.0005    0.0005    0.6102

Table 1. Mean and standard deviation of the estimated components of the unknown parameter $(\theta_1,\theta_2,\theta_3)=(-1,0,4)$ in Experiment 6.2, obtained using ESMM over 100 trials.
The first two and last two rows correspond to the sets of words $W_1$ and $W_2$ in (6.5) respectively.

Experiment 6.3. In this experiment, we set $m=2$, $n=1$, $\delta t=0.01$, and $T=0.2$. The SDE model under consideration involves $d=5$ unknown parameters $(\theta_1,\dots,\theta_5)$, and is given by
\[
\begin{aligned}
dY^{(0)}_t&=dt,\\
dY^{(1)}_t&=\theta_1\big(Y^{(2,1)}_t-Y^{(1,2)}_t\big)\,dt+\big(\theta_2 Y^\emptyset_t+\theta_3 Y^{(2)}_t\big)\circ dW^{(1)}_t,\\
dY^{(2)}_t&=\theta_4\big(Y^{(2,1)}_t-Y^{(1,2)}_t\big)\,dt+\theta_5 Y^\emptyset_t\circ dW^{(1)}_t.
\end{aligned}
\]
The true parameter is $\theta_0=(-1,5,1,-2,3)$, and for each of the 100 trials, we apply the ESMM to the sets of words
\[
W_3:=\{(1),(1,2),(2,1),(2,2),(0,1,1)\},\qquad W_4:=\{(1,2),(0,1,0),(0,1,1),(0,2,1),(1,1,0)\}. \tag{6.6}
\]
Table 2 presents the means and standard deviations of the components of the estimates $\hat{\theta}$ obtained across 100 trials, as specified in (6.4). We observe that the ESMM accurately estimates the components $\theta_2$, $\theta_3$, $\theta_5$, while the estimates for $\theta_1$, $\theta_4$ were better when the ESMM uses the word set $W_4$. While selecting the optimal word set is beyond the scope of this paper, these empirical results suggest that the choice
of word set influences the performance of the ESMM, and would be an interesting avenue for future work. Figure 3 in Appendix C shows the components of all the real solutions to the polynomial system (6.3) obtained in each trial and for each set of words.

                  θ1        θ2        θ3        θ4        θ5
  W3  mean      0.3851    4.8549    1.0133   -0.7635    2.9431
      std dev   2.7267    0.2567    0.2167    1.5968    0.1668
  W4  mean     -1.2528    5.0165    0.9763   -1.3885    3.0302
      std dev   0.8674    0.0748    0.1642    0.2941    0.0795

Table 2. Mean and standard deviation of the estimated components of the unknown parameter $(\theta_1,\dots,\theta_5)=(-1,5,1,-2,3)$ in Experiment 6.3, obtained using ESMM over 100 trials. The first two and last two rows correspond to the sets of words $W_3$ and $W_4$ in (6.6) respectively.

Experiment 6.4. In this experiment, we set $m=n=3$ and simulate solution trajectories with step size $\delta t=0.001$ over the interval $[0,T]$ with $T=0.1$. The following system defines our SDE model, which involves $d=6$ unknown parameters $(\theta_1,\dots,\theta_6)$:
\[
\begin{aligned}
dY^{(0)}_t&=dt,\\
dY^{(1)}_t&=\theta_1 Y^{(2)}_t\,dt+\theta_2\circ dW^{(1)}_t,\\
dY^{(2)}_t&=\theta_3 Y^{(1)}_t\,dt+\theta_4\circ dW^{(2)}_t,\\
dY^{(3)}_t&=\theta_5\big(Y^{(2,1)}_t-Y^{(1,2)}_t\big)\,dt+\theta_6\circ dW^{(3)}_t.
\end{aligned}
\]
This model is an example of a path-dependent stochastic causal kinetic model [38, Eq. (11)]. The true parameter in this experiment is $\theta_0=(1,1,-1,1,-5,1)$, and in each trial we obtain estimates of this parameter by performing the ESMM with respect to the sets of words
\[
W_5=\{(0,3),(1,1),(1,2),(2,1),(2,2),(3,3)\},\qquad W_6=\{(3),(1,1),(1,2),(2,1),(2,2),(3,3)\}. \tag{6.7}
\]
Table 3 indicates the component-wise means and standard deviations of the estimates $\hat{\theta}$ obtained over 100 trials, as specified in (6.4). We observe that the ESMM accurately estimates the parameter set for both choices of word sets $W_5$, $W_6$. We note that $\theta_5$ was more difficult to estimate. Figure 4 in Appendix C shows all the real solutions to the polynomial system (6.3) across the 100 trials and corresponding to each set of words.
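The selection rule (6.4) applied in each of these experiments simply keeps the real solution of the polynomial system closest to $\theta_0$ in the $L^1$ norm. A minimal sketch (the candidate solutions below are hypothetical values of ours, loosely patterned on the sign-symmetric solution pairs observed in the experiments):

```python
def closest_solution(solutions, theta0):
    """Pick the real solution closest to theta0 in the L1 norm, as in
    the selection rule (6.4) for theta-hat."""
    l1 = lambda th: sum(abs(a - b) for a, b in zip(th, theta0))
    return min(solutions, key=l1)

# hypothetical real solutions from one trial, true parameter (1,1,-1,1,-5,1)
candidates = [
    (1.03, 1.00, -1.01, 1.00, -6.4, 1.00),
    (1.03, -1.00, -1.01, -1.00, -6.4, -1.00),  # sign-flipped diffusion
]
theta0 = (1.0, 1.0, -1.0, 1.0, -5.0, 1.0)
assert closest_solution(candidates, theta0) == candidates[0]
```

Note that this rule uses knowledge of $\theta_0$ and is therefore only an evaluation device for the simulation study, not part of the estimator itself.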
Finally, we solve the polynomial system (6.3) with $N=200{,}000$ samples, obtained by aggregating the 2000 samples generated in each of the 100 trials.

                  θ1        θ2        θ3        θ4        θ5        θ6
  W5  mean      1.0264    1.0007   -1.0082    0.999    -6.415     1.0047
      std dev   0.3177    0.0158    0.3109    0.0158   17.0444    0.0162
  W6  mean      1.0243    1.001    -1.0107    0.9989   -7.6045    1.0047
      std dev   0.3186    0.0156    0.3115    0.0159   23.4885    0.0163

Table 3. Mean and standard deviation of the estimated components of the unknown parameter $(\theta_1,\dots,\theta_6)=(1,1,-1,1,-5,1)$ in Experiment 6.4, obtained using ESMM over 100 trials. The first two and last two rows correspond to the sets of words $W_5$ and $W_6$ in (6.7) respectively.

Among the real solutions to this polynomial system, the closest to $\theta_0$ in the $L^1$ norm is $(1.026, 1.001, -1.0079, 0.9993, -6.4232, 1.0049)$ for the word set $W_5$, and $(1.026, 1.001, -1.0079, 0.9993, -6.8913, 1.0049)$ for the word set $W_6$.

Remark 6.5. For a Brownian motion trajectory $W(\sigma)$, note that $-W^{(1)}(\sigma)$ is also the underlying path of another Brownian motion trajectory. Therefore, the laws of the $\mathcal{M}(\|\theta_0\|)$-solutions of the signature SDEs
\[
dY_t=A_{\theta_0}(Y_t)\,dt+B_{\theta_0}(Y_t)\circ dW_t\quad\text{and}\quad dY_t=A_{\theta_0}(Y_t)\,dt-B_{\theta_0}(Y_t)\circ dW_t
\]
are the same. This explains why, for each estimate $\hat{\theta}$, the polynomial system (6.3) also admits a solution whose drift component is close to the drift
component of $\hat{\theta}$, and whose diffusion component is close to the negative of the diffusion component of $\hat{\theta}$; see Figures 2 to 4 in Appendix C.

7. Non-identifiability of Parameters from the Law

The characteristic property of the signature in Theorem 2.17 ensures that the distribution of the solution $\mathbf{Y}_\theta$ of a linear signature SDE, restricted to noise terms in $E_\xi$, is uniquely characterized by the restricted expected signature of the solution $\mathbb{E}[\mathbf{Y}_\theta\cdot\chi_{E_\xi}]$. Therefore, the Expected Signature Matching Method identifies the unknown parameters of the SDE to the furthest extent possible given the observed distribution of the solution. However, in this section, we show that signature RDEs with distinct parameters can admit the same solution for sufficiently bounded driving signals. Therefore, identifying a unique parameter set giving rise to the observed trajectories of the solution may not be possible. Here, as before, we assume that $U\cong\mathbb{R}^{n+1}$ and $V\cong\mathbb{R}^{m+1}$, where the extra coordinate for time is included in the zeroth coordinate. Recall that a signature RDE, where the vector field depends on $\mathbf{Y}_t\in\widetilde{V}$ up to level $q$, can be written as
\[
dY_t=F_\theta(\mathbf{Y}_t)\,dX_t \tag{7.1}
\]
for some $\theta\in\Theta$, where $F_\theta:\widetilde{V}\to L(U,V)$ is defined in (4.1).

Next, we define two RDEs. Choose $q'\in\mathbb{N}$ such that $2q'\le q$, and fix a linear functional $H\in L(\widetilde{V},\mathbb{R})$ which does not depend on the final coordinate; in other words, $H(\mathbf{Y}_t)=\langle\lambda,\mathbf{Y}_t\rangle$, where $\lambda\in\widetilde{V}$ and $\lambda_I\ne0$ only if $I\in W([m-1]_0,q')$. Then fix a parameter $\nu\in\Theta$ such that for $i\in[m-1]$ and $j\in[n]_0$, $\nu^I_{i,j}=0$ if $|I|>q'$. Now consider the following path-dependent RDE:
\[
\big(dY^{(0)}_t,\dots,dY^{(m-1)}_t\big)^T=F^{0,\dots,m-1}_\nu(\mathbf{Y}_t)\,dX_t,\qquad
dY^{(m)}_t=\sum_{I\in W([m-1]_0,q')}\lambda_I\,\mathbf{Y}^{I^-}_t\,d\mathbf{Y}^{I_f}_t, \tag{7.2}
\]
where $F^{0,\dots,m-1}_\nu$ consists of rows $0,\dots,m-1$ of $F_\nu$. Thus, the coordinates $0,\dots,m-1$ evolve with respect to a linear signature RDE parametrized by $\nu$, while the final coordinate $Y^{(m)}_t$ is given in terms of the functional $H(\mathbf{Y}_t)$.
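The level-2 iterated integrals underlying constructions like (7.2), and the shuffle identity that rewrites such products as linear combinations of signature coordinates, can be checked exactly on piecewise-linear paths. The following sketch (our illustration) computes $S^{(1)},S^{(2)},S^{(1,2)},S^{(2,1)}$ for a two-dimensional path and verifies the level-2 shuffle identity, together with the Lévy-area functional used in Example 7.2 below:

```python
def level2_signature(increments):
    """Level-1 and level-2 iterated integrals of a piecewise-linear 2-D
    path started at the origin. Over a segment with increments (d1, d2)
    starting at running values (s1, s2):
        int y1 dy2 = s1*d2 + d1*d2/2,   int y2 dy1 = s2*d1 + d1*d2/2.
    """
    s1 = s2 = s12 = s21 = 0.0
    for d1, d2 in increments:
        s12 += s1 * d2 + d1 * d2 / 2.0   # int y1 dy2
        s21 += s2 * d1 + d1 * d2 / 2.0   # int y2 dy1
        s1 += d1
        s2 += d2
    return s1, s2, s12, s21

s1, s2, s12, s21 = level2_signature([(1.0, 0.0), (0.0, 1.0), (-0.5, 0.25)])
# shuffle identity at level 2: S^{(1)} * S^{(2)} = S^{(1,2)} + S^{(2,1)}
assert abs(s1 * s2 - (s12 + s21)) < 1e-12
# Levy-area functional H(Y) = (S^{(1,2)} - S^{(2,1)}) / 2, as in Example 7.2
H = 0.5 * (s12 - s21)
```

The asserted identity is the shuffle relation invoked in the text to rewrite (7.2) as a linear signature RDE.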
Note that using the shuffle identity, (7.2) is a linear signature RDE and can be written in the form of (7.1) with respect to some $\theta_1\in\Theta$. To define a second linear signature RDE with distinct parameters which yields the same solutions as (7.2), we introduce "hidden dynamics" to the system. In particular, take $\kappa\in\Theta$ such that for $i\in[m]$ and $j\in[n]_0$, $\kappa^I_{i,j}=0$ if $|I|>q'$. We define
\[
dY^{(0)}_t=dt,\qquad
\big(dY^{(1)}_t,\dots,dY^{(m)}_t\big)^T=\Big(F^{1,\dots,m}_{\theta_1}(\mathbf{Y}_t)+\big(Y^{(m)}_t-H(\mathbf{Y}_t)\big)F^{1,\dots,m}_\kappa(\mathbf{Y}_t)\Big)\,dX_t, \tag{7.3}
\]
where $F^{1,\dots,m}_\theta(\mathbf{Y}_t)$ excludes the zeroth row of $F_\theta(\mathbf{Y}_t)$. Using the shuffle identity, (7.3) is also a linear signature RDE, i.e. for some $\theta_2\in\Theta$, it can be written in the form of (7.1). We note that if $Y$ is a solution to (7.2), then $Y^{(m)}_t=H(\mathbf{Y}_t)$ for all $t\in[0,T]$, so $Y$ is also a solution to (7.3).

Proposition 7.1. For a sufficiently bounded driving signal trajectory $\mathbf{X}$ satisfying
\[
d_p(\mathbf{X},0)<\min\{\mathcal{N}(\|\theta_1\|),\mathcal{N}(\|\theta_2\|)\}, \tag{7.4}
\]
the $\mathcal{M}(\|\theta_1\|)$-solution to (7.2) coincides with the $\mathcal{M}(\|\theta_2\|)$-solution to (7.3). Therefore, the distributions of $\mathbf{Y}_{\theta_1}\cdot\chi_{E_\xi}$ and $\mathbf{Y}_{\theta_2}\cdot\chi_{E_\xi}$ are the same if $\min\{\mathcal{N}(\|\theta_1\|),\mathcal{N}(\|\theta_2\|)\}\ge\xi$.

Proof. The proof is given in Appendix B. □

Example 7.2. Define $H(\mathbf{Y}_t):=\frac{1}{2}\big(Y^{(1,2)}_t-Y^{(2,1)}_t\big)$, and consider the following path-dependent RDEs:

dY^{(0)}_t = dt,
dY^{(1)}_t = Y^{(1)}_t dt + dW^{(1)}_t,
dY^{(2)}_t = -Y^{(2)}_t dt + dW^{(2)}_
|
https://arxiv.org/abs/2505.22646v1
|
t, dY(3) t= −Y(1,2) t−Y(2,1) t dt−1 2Y(2) t◦dW(1) t+1 2Y(1) t◦dW(2) t.(7.5) dY(0) t=dt, dY(1) t= Y(1) t+H(Yt)−Y(3) t dt+dW(1) t, dY(2) t= −Y(2) t+H(Yt)−Y(3) t dt+dW(2) t, dY(3) t= −Y(1,2) t−Y(2,1) t+H(Yt)−Y(3) t dt−1 2Y(2) t◦dW(1) t+1 2Y(1) t◦dW(2) t.(7.6) Using the Stratonovich midpoint scheme with step size 0.001, we numerically compute the un- derlying paths in the solutions to these two RDEs corresponding to a sampled trajectory of the 28 PARDIS SEMNANI, VINCENT GUAN, ELINA ROBEVA, AND DARRICK LEE (a)Trajectories observed until T=0.3 (b)Trajectories observed until T=2.0 Figure 1. The underlying paths in the sampled solutions to the path-dependent SDEs in (7.5) and (7.6) are plotted for each component of the paths over the interval [0,T]for (a)T=0.3 and (b) T=2.0. The L2distance between the two trajectories evaluated at Tis (a) 0.045 and (b) 1.743. Brownian motion on the interval [0, 2]. See Figure 1. Note that after a certain point of time, the bound condition in (7.4) on the driving signal is no longer satisfied. As a result, we can no longer guarantee that the solutions will coincide. PATH-DEPENDENT SDES: SOLUTIONS AND PARAMETER ESTIMATION 29 Appendix A. Notation and Conventions Symbol Description Page Fixed Parameters m dimension of solution (excluding time parameter) n dimension of driving noise (excluding time parameter) q truncation level of signature for signature vector field 10 p p -variation of a path 10 r depth of Picard iteration N number of sampled trajectories T length of time interval [0,T] γ constant for Lip (γ)functions 10 M radius of restriction ball ξ parameter used to fix uniform bounds on noise 20 d dimension of the unknown component of the parameter space 20 N,M noise bounded by N(ξ)results in solutions bounded by M(ξ) 15 Sets and Spaces [n],[n]0 finite sets [n]:={1, . . . , n}and[n]0:={0, . . . 
, n} (S,F,P) probability space 14 W(A,ℓ) words in a set Aof length at most ℓ 17 U,V Banach spaces for driving signal ( U) and solution ( V) 10 T(V),T( (V) ) tensor algebra and its completion 5 H( (V) ) Hilbert space completion of tensor algebra 5 eV shorthand for eV:=T(≤q)(V) 11 Bε(θ) open ball of radius εcentered at θwith respect to ∥.∥∞ GΩp(V) geometric p-rough paths on V 7 GΩtp p(R×V) time parametrized geometric p-rough paths 9 Notation for RDE/SDEs S path signature for a (rough) path 5, 7 0U the trivial rough path 0U:= (1, 0, . . . 0 )inGΩp(U) 7 1 multiplicative identity in eV 11 πU projection map πU:T( (U⊕V) )→T( (U) )for tensor algebras 8 tens, tens M tensor product and its modification tens, tens M:eV→L(V,eV) 11,12 KM M-ball in truncated tensor algebra KM⊂eV 12 GM subset GM⊂GΩp(U)of driving noise such that the M-solution is bounded by M 13 IF,M lifted M-solution map IF,M:GΩp(U)→GΩp(eV)for RDE with vector field F 12 JF,M M-solution map JF,M:GM→GΩp(V)for RDE with vector field F 14 Aθ,Bθ,Fθ parametrized vector field Fθ:eV→L(U,V)with drift Aθand diffusion Bθ 15 Fθ,M lifted vector field Fθ,M:eV→L(U,eV) 15 Notation for Parameter Estimation θ0= (θkn,θ0)true parameter for the vector field, decomposed
|
https://arxiv.org/abs/2505.22646v1
|
into known and unknown components 20 Θkn,Θunknsubspaces consisting of known and unknown components of the parameter space 20 Θξ subset of parameters Θξ⊂Θunknallowable with respect to ξ 20 Eξ subset of samples Eξ⊂ S allowable with respect to ξ 20 χEξindicator function on the set Eξ Pr(θ),P(θ) expectation of Picard iteration/solution as functions in θ 20,21 Q(r,ℓ) maximum degree of polynomial for depth rPicard iteration 17 Jf(θ) Jacobian of fatθ 30 PARDIS SEMNANI, VINCENT GUAN, ELINA ROBEVA, AND DARRICK LEE The following are some conventions for (rough) paths. •Paths are denoted using unbold capital letters Y:[0,T]→V. •Elements/paths valued in the (truncated/completed) tensor algebra are denoted using bold capital letters; for instance a rough path Y∈GΩp(V). •Paths valued in the (truncated/completed) tensor algebra of the tensor algebra are de- noted using calligraphic capital letters; for instance a rough path of signatures Y ∈ GΩp(eV). Appendix B. Proofs of Lemmas and Propositions Proof of Lemma 2.12. For all k=0, . . . , γ0, to obtain h(k)(u)∈L(U⊗k, L(V1,V3))foru∈U, we apply the general Leibniz rule [ 34, p. 318] and get h(k)(u)(eI) =k ∑ i=0∑ J⊂[k] |J|=ig(i)(u)(eIJ)·f(k−i)(u)(eI[k]\J)∈L(V1,V3) (B.1) for basis elements eIofU⊗k, as defined in (2.1). Therefore, for some constant K1>0, ∥h(k)(u)∥ ≤ K1(k+1)2k∥g∥Lip(γ)∥f∥Lip(γ). (B.2) Then, using (B.1) again, for u1,u2∈U, we have h(γ0)(u1)−h(γ0)(u2) (eI) = γ0 ∑ i=0∑ J⊂[γ0],|J|=i g(i)(u1)(eIJ)·f(γ0−i)(u1)(eI[γ0]\J) −g(i)(u2)(eIJ)·f(γ0−i)(u2)(eI[γ0]\J) ≤2γ0γ0 ∑ i=0 g(i)(u1)−g(i)(u2) f(γ0−i)(u1) +2γ0γ0 ∑ i=0 f(γ0−i)(u1)−f(γ0−i)(u2) g(i)(u2) ≤2·2γ0∥f∥Lip(γ)∥g∥Lip(γ) γ021−(γ−γ0)+1 ∥u1−u2∥γ−γ0, where in the last line we use the fact that for all i=0, . . . , γ0−1, ∥f(i)(u1)−f(i)(u2)∥=∥f(i)(u1)−f(i)(u2)∥1−(γ−γ0)· ∥f(i)(u1)−f(i)(u2)∥γ−γ0 ≤ 2∥f∥Lip(γ)1−(γ−γ0) ∥f∥Lip(γ)∥u1−u2∥γ−γ0. Therefore, for some constant K2>0, we have h(γ0)(u1)−h(γ0)(u2) ≤K2∥f∥Lip(γ)∥g∥Lip(γ)∥u1−u2∥γ−γ0. (B.3) Equations (B.2) and (B.3) conclude the proof. 
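The general Leibniz rule invoked in (B.1) can be sanity-checked numerically. In one dimension the sum over subsets $J \subset [k]$ with $|J| = i$ collapses to the binomial coefficient $\binom{k}{i}$. The snippet below, a standalone illustration of ours rather than code from the paper, verifies for $h = g \cdot f$ with $g = \sin$ and $f = \exp$ that the Leibniz expansion of $h^{(k)}$ matches the closed form $(e^x \sin x)^{(k)} = 2^{k/2} e^x \sin(x + k\pi/4)$.

```python
import math

def leibniz_kth_derivative(x, k):
    """k-th derivative of h(x) = sin(x) * exp(x) via the general Leibniz rule.

    Using sin^{(i)}(x) = sin(x + i*pi/2) and exp^{(j)} = exp, the rule gives
    h^{(k)}(x) = sum_{i=0}^{k} C(k, i) sin(x + i*pi/2) exp(x).
    """
    return sum(
        math.comb(k, i) * math.sin(x + i * math.pi / 2) * math.exp(x)
        for i in range(k + 1)
    )

def closed_form(x, k):
    # Known closed form: (e^x sin x)^{(k)} = 2^{k/2} e^x sin(x + k*pi/4).
    return 2 ** (k / 2) * math.exp(x) * math.sin(x + k * math.pi / 4)

for k in range(8):
    assert abs(leibniz_kth_derivative(0.7, k) - closed_form(0.7, k)) < 1e-9
print("Leibniz rule verified for k = 0..7")
```

The $(k+1)\,2^k$ factor in (B.2) simply counts the summands of (B.1): $k+1$ choices of $i$ and at most $2^k$ subsets $J$.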
Proof of Proposition 3.8. First assume $X$ is a bounded variation path, i.e. $p = 1$. Let $\mathbf{Z} \in G\Omega_1(U \oplus \widetilde{V})$ be as in Definition 3.4. By definition of the coupled $M$-solution, for all $(s, t) \in \Delta_T$, we have
\[ Z^1_{s,t} = \int_s^t h_M(Z^1_{0,u})\, dZ^1_{0,u}. \qquad (B.4) \]
Therefore, $\mathcal{Y}^{(\emptyset)}_{s,t} = \mathcal{Z}^{(\emptyset)}_{s,t} = \int_s^t 0 = 0$. Now we prove the first equality in (3.6) by induction on $k$. If $k = 1$, this equality trivially holds. Assume it is also true for all $k \in [\ell]$ for some $1 \le \ell < q$. We will prove it is then true when $k = \ell + 1$. For $\mathcal{Y} = \pi_{\widetilde{V}}(\mathbf{Z}) \in G\Omega_1(\widetilde{V})$, we have
\[ \begin{aligned} \mathcal{Y}^{((i_1, \ldots, i_{\ell+1}))}_{s,t} &= \mathcal{Z}^{((i_1, \ldots, i_{\ell+1}))}_{s,t} && (\text{since } \pi_{\widetilde{V}}(\mathbf{Z}) = \mathcal{Y}) \\ &= \int_s^t \mathcal{Z}^{((i_1, \ldots, i_\ell))}_{0,u}\, d\mathcal{Z}^{((i_{\ell+1}))}_{0,u} && (\text{(B.4), and } \|\pi_{\widetilde{V}}(\mathbf{Z})^1_{0,u} + \mathbf{1}\| < M \ \forall u \in [s,t]) \\ &= \int_s^t \mathcal{Y}^{((i_1, \ldots, i_\ell))}_{0,u}\, d\mathcal{Y}^{((i_{\ell+1}))}_{0,u} && (\text{since } \pi_{\widetilde{V}}(\mathbf{Z}) = \mathcal{Y}) \\ &= \int_s^t \mathcal{Y}^{((i_1), \ldots, (i_\ell))}_{0,u}\, d\mathcal{Y}^{((i_{\ell+1}))}_{0,u} && (\text{by the induction hypothesis}) \\ &= \mathcal{Y}^{((i_1), \ldots, (i_{\ell+1}))}_{s,t} && (\text{since } \mathcal{Y} \text{ is a bounded variation path}). \end{aligned} \]
This concludes the proof of the first equality in (3.6) in the bounded 1-variation case.

Now let $X \in G\Omega_p(U)$. Then there exists a sequence $\{X(n)\}_{n \in \mathbb{N}} \subset G\Omega_p(U)$ of extensions of bounded variation paths such that $d_p(X(n), X) \to 0$ as $n \to \infty$. Therefore, we get that $d_p(\mathcal{Y}(n), \mathcal{Y}) \to 0$ as $n \to \infty$, where $\mathcal{Y}(n) := I_{F,M}(X(n))$. Since for all $t \in [0,T]$ we have $\|\mathbf{Y}_{0,t}\| = \|\mathcal{Y}^1_{0,t} + \mathbf{1}\| < M$, there exists $N > 0$ such that for all $n \ge N$, we have $\|\mathcal{Y}(n)^1_{0,t} + \mathbf{1}\| < M$ for all $t \in [0,T]$. Hence, for all $n \ge N$,
\[ \mathcal{Y}(n)^{(\emptyset)}_{s,t} = 0, \qquad \mathcal{Y}(n)^{((i_1), \ldots, (i_k))}_{s,t} = \mathcal{Y}(n)^{((i_1, \ldots, i_k))}_{s,t}. \]
Taking the limit as $n \to \infty$ proves
\[ \mathbf{Y}^{\emptyset}_{s,t} = 1 + \mathcal{Y}^{(\emptyset)}_{s,t} = 1, \qquad \mathbf{Y}^{(i_1, \ldots, i_k)}_{s,t} = \mathcal{Y}^{((i_1, \ldots, i_k))}_{s,t} = \mathcal{Y}^{((i_1), \ldots, (i_k))}_{s,t} \qquad (B.5) \]
for arbitrary $X \in G\Omega_p(V)$, where we use the continuity of the extension in [27, Theorem 3.10] when $\lfloor p \rfloor < k \le q$.
Equation (B.5) proves that $\mathbf{Y} = \pi_V(\mathcal{Y})$ as desired. □

Proof of Proposition 4.1. Our first step is to define a control $\omega$ for each $X \in G\Omega_p(U)$ which we will later bound. Recall that for $s < t$, we define $\Delta_{s,t} := \{(u_1, u_2) : s \le u_1 < u_2 \le t\}$. We claim that $\omega : \Delta_T \to \mathbb{R}$ defined by
\[ \omega(s, t) = C\, d_p(X|_{\Delta_{s,t}}, \mathbf{0})^p \]
is a control for the $p$-variation of $X \in G\Omega_p(U)$. Indeed, let $0 \le s \le u \le t \le T$. Then for any $i \in [\lfloor p \rfloor]$, any partition $s = a_0 < a_1 < \cdots < a_\ell = u$, and any partition $u = a_\ell < a_{\ell+1} < \cdots < a_{\ell+\ell'} = t$, we have
\[ \sum_{k=1}^{\ell} \big\| X^i_{a_{k-1}, a_k} \big\|^{p/i} + \sum_{k=1}^{\ell'} \big\| X^i_{a_{\ell+k-1}, a_{\ell+k}} \big\|^{p/i} \le \max_{1 \le i \le \lfloor p \rfloor} \sup_{\substack{s = b_0 < b_1 < \cdots < b_{\ell''} = t \\ \ell'' \in \mathbb{N}}} \sum_{k=1}^{\ell''} \big\| X^i_{b_{k-1}, b_k} \big\|^{p/i} = d_p(X|_{\Delta_{s,t}}, \mathbf{0})^p. \]
Therefore, $\omega$ is super-additive, i.e.
\[ d_p(X|_{\Delta_{s,u}}, \mathbf{0})^p + d_p(X|_{\Delta_{u,t}}, \mathbf{0})^p \le d_p(X|_{\Delta_{s,t}}, \mathbf{0})^p. \]
Moreover, for all $(s, t) \in \Delta_T$, we have
\[ \big\| X^i_{s,t} \big\| \le \Big( d_p(X|_{\Delta_{s,t}}, \mathbf{0})^p \Big)^{i/p} \le \frac{\omega(s,t)^{i/p}}{\beta\, (i/p)!}. \]
This concludes the proof that $\omega$ is a control for the $p$-variation of $X$.

Next, our aim is to determine the conditions under which the bounds on the Picard iterations in part (4) of Theorem 3.9 hold for the entire interval $[0, T]$. In order to do so, we must adapt intermediate steps of the proof of the Universal Limit Theorem in [27, Theorem 5.3]. In particular, in [27, Section 5.5], we must consider for $i = 0, 1, 2$ three paths $Z^i \in G\Omega_p(B_i)$ on Banach spaces $B_i$ and associated vector fields $H^i_{\theta,M} : B_i \to L(B_i, B_i)$. In our setting, the vector fields are determined by $h_{\theta,M}$ from (3.5), where the $\theta$ denotes the additional dependence on $\theta$ of our parametrized vector fields in (4.2). In order to obtain $T_\rho$ in part (4) of Theorem 3.9, one must consider the $p$-variation of $\int H^i_{\theta,M}(Z^i)\, dZ^i$. By [27, Theorem 4.12], there exists a function $K : \mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ such that for $i = 0, 1, 2$ and all $Z^i \in G\Omega_p(B_i)$ with $p$-variation controlled by some control $\widehat{\omega}$ with $\widehat{\omega}(0, T) \le 1$, the $p$-variation of $\int H^i_{\theta,M}(Z^i)\, dZ^i$ is controlled by $K\big( \|H^0_{\theta,M}\|_{\mathrm{Lip}}, \|H^1_{\theta,M}\|_{\mathrm{Lip}}, \|H^2_{\theta,M}\|_{\mathrm{Lip}} \big)\, \widehat{\omega}$. By the proof of [27, Theorem 4.12], $K$ can be considered to be continuous and non-decreasing in all of the variables.
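The super-additivity used above rests on the observation that concatenating a partition of $[s, u]$ with one of $[u, t]$ yields a partition of $[s, t]$. The sketch below, an illustration of ours under the simplification of a scalar, discretely sampled path, computes the level-one $p$-variation supremum over partitions by dynamic programming and checks that the resulting control is super-additive.

```python
def pvar_p(xs, p):
    """sup over partitions of sum |x_{t_k} - x_{t_{k-1}}|^p for a sampled scalar path.

    best[j] holds the p-variation (to the power p) of xs[0..j] over partitions
    whose points are sample indices; a partition need not use every index.
    """
    best = [0.0] * len(xs)
    for j in range(1, len(xs)):
        best[j] = max(best[i] + abs(xs[j] - xs[i]) ** p for i in range(j))
    return best[-1]

xs = [0.0, 1.0, 0.3, 0.9, -0.2, 0.5, 0.1]
p = 2.5
whole = pvar_p(xs, p)
for u in range(1, len(xs) - 1):
    # Concatenating optimal partitions of the two halves is admissible for the
    # whole interval, so the control is super-additive at every split point u.
    assert pvar_p(xs[: u + 1], p) + pvar_p(xs[u:], p) <= whole + 1e-12
```

The same monotone argument, applied to every level $i \le \lfloor p \rfloor$ of the rough path, is what makes $\omega$ a control.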
Note that there exists a function $I : \mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ such that $I$ is continuous and non-decreasing in both variables, and $\|h_{\theta,M}\|_{\mathrm{Lip}} \le I(M, \|\theta\|)$. Then, since the norms of the vector fields $H^i_{\theta,M}$ are bounded by continuous and non-decreasing functions of $\|h_{\theta,M}\|_{\mathrm{Lip}}$, we can define a continuous and non-decreasing (in both variables) function $J : \mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ such that
\[ \max\Big\{ \|H^0_{\theta,M}\|_{\mathrm{Lip}},\ \|H^1_{\theta,M}\|_{\mathrm{Lip}},\ \|H^2_{\theta,M}\|_{\mathrm{Lip}} \Big\} \le J(M, \|\theta\|). \]
Therefore, for $i = 0, 1, 2$, the $p$-variation of $\int H^i_{\theta,M}(Z)\, dZ$ is also controlled by $\widehat{K}(M, \|\theta\|)\, \widehat{\omega}$, where $\widehat{K}(M, \|\theta\|) := K(J(M, \|\theta\|), J(M, \|\theta\|), J(M, \|\theta\|))$. Now by the proof of the Universal Limit Theorem in [27, Page 89], if
\[ \omega(0, T) \le \frac{1}{\widehat{K}(M, \|\theta\|)^{\lfloor p \rfloor}}, \qquad (B.6) \]
then part (4) of Theorem 2.15 holds for $T_\rho = T$. In our setting, where we consider the level 1 component as in part (4) of Theorem 3.9, we have in particular for all $(s, t) \in \Delta_T$ and all $r \in \mathbb{Z}_{\ge 0}$,
\[ \| Y_{\theta,M}(r)_{s,t} - Y_{\theta,M}(r+1)_{s,t} \| \le \frac{2 \rho^{-r}\, \omega(0,T)^{1/p}}{\beta\, (1/p)!} \le \frac{2 \rho^{-r}\, C^{1/p}\, d_p(X, \mathbf{0})}{\beta\, (1/p)!}. \]
Then, this implies that
\[ \| Y_{\theta,M}(r)_{s,t} \| \le \| Y_{\theta,M}(0)_{s,t} \| + \frac{2 C^{1/p}\, d_p(X, \mathbf{0})}{\beta\, (1/p)!} \sum_{i=0}^{r-1} \rho^{-i} < 1 + \frac{2 C^{1/p}\, d_p(X, \mathbf{0})}{(1 - \rho^{-1})\, \beta\, (1/p)!}. \qquad (B.7) \]
Now by restricting to sufficiently bounded $X$, we wish to obtain uniform bounds on the values $\| Y_{\theta,M}(r)_{s,t} \|$ which are valid for all $(s, t) \in \Delta_T$. To do so, we define the function
\[ \mathcal{F} : (N, \|\theta\|) \mapsto C N^p\, \widehat{K}\Big( 1 + \frac{4 C^{1/p}}{(1 - \rho^{-1})\, \beta\, (1/p)!}\, N,\ \|\theta\| \Big)^{\lfloor p \rfloor}. \]
For all $\theta \in \Theta$, since $\mathcal{F}(0, \|\theta\|) = 0$, there exists $N > 0$ with $\mathcal{F}(N, \|\theta\|) \le 1$ by continuity. Fix $N_0 > 0$ and set
\[ \mathcal{N}(\mu) := \sup\{ N \in (0, N_0] : \mathcal{F}(N, \mu) \le 1 \} \qquad \text{and} \qquad \mathcal{M}(\mu) := 1 + \frac{4 C^{1/p}}{(1 - \rho^{-1})\, \beta\, (1/p)!}\, \mathcal{N}(\mu). \]
Now, consider $X$ such that $d_p(X, \mathbf{0}) \le \mathcal{N}(\|\theta\|)$. Then $\omega(0, T)\, \widehat{K}(\mathcal{M}(\|\theta\|), \|\theta\|)^{\lfloor p \rfloor} \le \mathcal{F}(\mathcal{N}(\|\theta\|), \|\theta\|) \le 1$. In particular, the condition in (B.6) holds. Then in this setting, by (B.7), for all $(s, t) \in \Delta_T$,
\[ \big\| Y_{\theta, \mathcal{M}(\|\theta\|)}(r)_{s,t} \big\| \le 1 + \frac{2 C^{1/p}\, d_p(X, \mathbf{0})}{(1 - \rho^{-1})\, \beta\, (1/p)!} < \mathcal{M}(\|\theta\|). \]
Furthermore, by part (3) of Theorem 3.9, this implies that
\[ \big\| (Y_{\theta, \mathcal{M}(\|\theta\|)})_{s,t} \big\| \le 1 + \frac{2 C^{1/p}\, d_p(X, \mathbf{0})}{(1 - \rho^{-1})\, \beta\, (1/p)!} < \mathcal{M}(\|\theta\|). \]
To finish the proof, it only remains to show that $\mathcal{N}$ and $\mathcal{M}$ are non-increasing. Let $\mu_1, \mu_2 \in \mathbb{R}_+$ with $\mu_1 \le \mu_2$. Then $\mathcal{F}(\mathcal{N}(\mu_2), \mu_1) \le \mathcal{F}(\mathcal{N}(\mu_2), \mu_2) \le 1$, which means that $\mathcal{N}(\mu_1) \ge \mathcal{N}(\mu_2)$, and thus, $\mathcal{M}(\mu_1) \ge \mathcal{M}(\mu_2)$ as desired. □

Proof of Lemma 5.7. By Proposition 4.1, for all $\theta \in \Theta_{\mathrm{unkn}}$ and $\sigma \in \mathcal{S}$, if $d_p(X(\sigma), \mathbf{0}) \le \mathcal{N}(\|\theta\|)$, we have
\[ \big\| Y_\theta(r)(\sigma)^I_{0,T} - Y_\theta(r+1)(\sigma)^I_{0,T} \big\| \le \frac{2 \rho^{-r}\, C^{1/p}\, d_p(X(\sigma), \mathbf{0})}{\beta\, (1/p)!} \quad \text{for all } r \in \mathbb{Z}_{\ge 0}, \]
where constants $\rho$ and $C$ are as in (4.3). So, by Theorem 3.9,
\[ \big\| Y_\theta(r)(\sigma)^I_{0,T} - Y_\theta(\sigma)^I_{0,T} \big\| \le \frac{2 \rho^{-r}}{(1 - \rho^{-1})\, \beta\, (1/p)!}\, C^{1/p}\, d_p(X(\sigma), \mathbf{0}) \quad \text{for all } r \in \mathbb{Z}_{\ge 0}. \]
For all $\theta \in \Theta_\xi$ and $\sigma \in E_\xi$, we have $d_p(X(\sigma), \mathbf{0}) < \xi \le \mathcal{N}(\|\theta\|)$. Hence, for all $\theta \in \Theta_\xi$ and $\sigma \in \mathcal{S}$,
\[ \big\| Y_\theta(r)(\sigma)^I_{0,T} \cdot \chi_{E_\xi}(\sigma) - Y_\theta(\sigma)^I_{0,T} \cdot \chi_{E_\xi}(\sigma) \big\| \le \frac{2 \rho^{-r}}{(1 - \rho^{-1})\, \beta\, (1/p)!}\, C^{1/p}\, \xi \quad \text{for all } r \in \mathbb{Z}_{\ge 0}. \]
Now taking expectations, we get that for all $\theta \in \Theta_\xi$ and $r \in \mathbb{Z}_{\ge 0}$,
\[ \big| P^I_r(\theta) - P^I(\theta) \big| \le \mathbb{E}\Big[ \big| Y_\theta(r)^I_{0,T} \cdot \chi_{E_\xi} - (Y_\theta)^I_{0,T} \cdot \chi_{E_\xi} \big| \Big] \le \frac{2 \rho^{-r}}{(1 - \rho^{-1})\, \beta\, (1/p)!}\, C^{1/p}\, \xi. \qquad (B.8) \]
Thus $\{P^I_r\}_{r \in \mathbb{Z}_{\ge 0}}$ converges uniformly on $\Theta_\xi$ to $P^I$ as $r \to \infty$. But by Theorem 5.4, the functions $P^I_r$ are polynomials, and therefore, continuous on $\Theta_\xi$. So $P^I$ is continuous as well. □

Proof of Lemma 5.9. Without loss of generality, we can assume for all $k \in [d]$,
\[ P^{I_k}(\theta_1) - P^{I_k}(\theta_0) > 0 \quad \text{and} \quad P^{I_k}(\theta_2) - P^{I_k}(\theta_0) < 0 \quad \text{for all } \theta_1 \in D^+_k \text{ and } \theta_2 \in D^-_k. \]
Note that all sets in $\mathcal{D}$ are compact by definition. For all $k \in [d]$, let
\[ \eta_k := \min\Bigg\{ \min_{\theta_1 \in D^+_k} \frac{P^{I_k}(\theta_1) - P^{I_k}(\theta_0)}{2},\ \min_{\theta_2 \in D^-_k} \frac{P^{I_k}(\theta_0) - P^{I_k}(\theta_2)}{2} \Bigg\} > 0. \qquad (B.9) \]
By Lemma 5.7, there exists $r_0 \in \mathbb{N}$ such that for all $r \ge r_0$ and all $k \in [d]$,
\[ \sup_{\theta \in \Theta_\xi} \big| P^{I_k}_r(\theta) - P^{I_k}(\theta) \big| < \eta_k. \]
So, for all $\theta_1 \in D^+_k$ and all $\theta_2 \in D^-_k$, we have
\[ \begin{aligned} P^{I_k}_r(\theta_1) - P^{I_k}(\theta_0) &> P^{I_k}(\theta_1) - P^{I_k}(\theta_0) - \eta_k \ge \min_{\theta_1 \in D^+_k} \big( P^{I_k}(\theta_1) - P^{I_k}(\theta_0) \big) - \eta_k \ge \eta_k, \\ P^{I_k}(\theta_0) - P^{I_k}_r(\theta_2) &> P^{I_k}(\theta_0) - P^{I_k}(\theta_2) - \eta_k \ge \min_{\theta_2 \in D^-_k} \big( P^{I_k}(\theta_0) - P^{I_k}(\theta_2) \big) - \eta_k \ge \eta_k. \end{aligned} \]
On the other hand, by the Strong Law of Large Numbers, almost surely, there exists $N_0 \in \mathbb{N}$ such that for all $N \ge N_0$ and all $k \in [d]$, we have
\[ \Bigg| \frac{1}{N} \sum_{i=1}^{N} Y_{\theta_0}(\sigma_i)^{I_k}_{0,T} \cdot \chi_{E_\xi}(\sigma_i) - P^{I_k}(\theta_0) \Bigg| < \eta_k. \]
So, for all $r \ge r_0$, all $N \ge N_0$, all $k \in [d]$, all $\theta_1 \in D^+_k$, and all $\theta_2 \in D^-_k$,
\[ P^{I_k}_r(\theta_2) < P^{I_k}(\theta_0) - \eta_k < \frac{1}{N} \sum_{i=1}^{N} Y_{\theta_0}(\sigma_i)^{I_k}_{0,T} \cdot \chi_{E_\xi}(\sigma_i) < P^{I_k}(\theta_0) + \eta_k < P^{I_k}_r(\theta_1). \]
This means the continuous mapping $P_{r,N} : D \to \mathbb{R}^d$ defined by
\[ P_{r,N}(\theta) := \Bigg( P^{I_1}_r(\theta) - \frac{1}{N} \sum_{i=1}^{N} Y_{\theta_0}(\sigma_i)^{I_1}_{0,T} \cdot \chi_{E_\xi}(\sigma_i),\ \ldots,\ P^{I_d}_r(\theta) - \frac{1}{N} \sum_{i=1}^{N} Y_{\theta_0}(\sigma_i)^{I_d}_{0,T} \cdot \chi_{E_\xi}(\sigma_i) \Bigg) \]
satisfies the Miranda conditions on $(D, \mathcal{D})$. Therefore, by [30, Theorem 2.7], there exists $\theta_{r,N} \in D$ such that $P_{r,N}(\theta_{r,N}) = 0$, i.e. the system (5.8) has a solution in $D$. □

Proof of Lemma 5.10. This proof closely follows the idea of the proof of [30, Theorem 3.1]. Define
\[ g : \mathbb{R}^d \to \mathbb{R}^d, \qquad \theta \mapsto (J_P(\theta_0))^{-1} \theta + \theta_0. \]
Let $\widetilde{\Theta}_\xi := g^{-1}(\Theta_\xi)$, and define
\[ h = (h_1, \ldots, h_d) : \widetilde{\Theta}_\xi \xrightarrow{\ g\ } \Theta_\xi \xrightarrow{\ P\ } \mathbb{R}^d. \]
Moreover, define
\[ R = (R_1, \ldots, R_d) : \widetilde{\Theta}_\xi \to \mathbb{R}^d, \qquad R(\theta) = h(\theta) - P(\theta_0) - \theta. \]
Note that $g$ is differentiable, $g(0) = \theta_0$, and $P$ is differentiable at $\theta_0$. Thus, by the chain rule, $h$ is differentiable at $0$ and we get
\[ J_h(0) = J_P(g(0))\, J_g(0) = J_P(\theta_0)\, (J_P(\theta_0))^{-1} = \mathrm{Id}_d, \]
where $\mathrm{Id}_d$ denotes the $d \times d$ identity matrix. This means that we have
\[ \frac{R(\theta)}{\|\theta\|_\infty} = \frac{h(\theta) - h(0) - J_h(0)(\theta - 0)}{\|\theta\|_\infty} \qquad \text{and} \qquad \lim_{\theta \to 0} \frac{R(\theta)}{\|\theta\|_\infty} = 0, \]
since $h$ is differentiable at $0$. Thus, there exists $\varepsilon_0 > 0$ such that $B_{\varepsilon_0}(\theta_0) \subset \Theta_\xi$, and for all $\theta = (\theta_1, \ldots, \theta_d) \in \widetilde{\Theta}_\xi$ with $\|\theta\|_\infty \le \frac{\varepsilon_0}{2 \|(J_P(\theta_0))^{-1}\|_\infty}$, we have
\[ -\frac{1}{2} < \frac{R_k(\theta)}{\|\theta\|_\infty} < \frac{1}{2} \qquad \text{for all } k \in [d]. \]
This implies that
\[ -\frac{1}{2} \|\theta\|_\infty + \theta_k < h_k(\theta) - P^{I_k}(\theta_0) < \frac{1}{2} \|\theta\|_\infty + \theta_k \qquad \text{for all } k \in [d]. \]
Now let $\varepsilon \in (0, \varepsilon_0]$. Set
\[ \delta := \frac{\varepsilon}{2 \|(J_P(\theta_0))^{-1}\|_\infty}. \qquad (B.10) \]
Moreover, set $D := g(Q(\delta))$, $D^+_k := g(Q_k(\delta)^+)$, and $D^-_k := g(Q_k(\delta)^-)$ for all $k \in [d]$. Then $D$ is a Miranda domain and $\mathcal{D} := \{D^\pm_1, \ldots, D^\pm_d\}$ is a Miranda partition of $\partial D$. Note that for all $\theta \in Q(\delta)$, we have
\[ \|g(\theta) - \theta_0\|_\infty = \big\| (J_P(\theta_0))^{-1} \theta \big\|_\infty \le \big\| (J_P(\theta_0))^{-1} \big\|_\infty\, \delta < \varepsilon. \]
So, $D \subset B_\varepsilon(\theta_0)$. On the other hand, for all $k \in [d]$ and all $\theta_1 \in D^+_k$, we have $\theta_1 = g(\theta'_1)$ for some $\theta'_1 \in Q_k(\delta)^+$. So,
\[ P^{I_k}(\theta_1) - P^{I_k}(\theta_0) = h_k(\theta'_1) - P^{I_k}(\theta_0) > -\frac{1}{2} \|\theta'_1\|_\infty + (\theta'_1)_k = -\frac{1}{2} \delta + \delta = \frac{1}{2} \delta. \qquad (B.11) \]
Similarly, for all $\theta_2 \in D^-_k$, we have $\theta_2 = g(\theta'_2)$ for some $\theta'_2 \in Q_k(\delta)^-$. So,
\[ P^{I_k}(\theta_0) - P^{I_k}(\theta_2) = P^{I_k}(\theta_0) - h_k(\theta'_2) > -\frac{1}{2} \|\theta'_2\|_\infty - (\theta'_2)_k = -\frac{1}{2} \delta + \delta = \frac{1}{2} \delta. \qquad (B.12) \]
This concludes the proof. □

Proof of Proposition 5.12. Let $\hat\theta \in \Theta_\xi$. Consider the open neighborhood $\mathcal{U} := \{\theta \in \Theta_\xi : 2\|\hat\theta\| > \|\theta\| > \tfrac{1}{2}\|\hat\theta\|\}$ of $\hat\theta$. Recall that $P(\theta) = (P^{I_1}(\theta), \ldots, P^{I_d}(\theta))$, where $P^I(\theta) = \mathbb{E}[(Y_\theta)^I_{0,T} \cdot \chi_{E_\xi}]$. Our aim is to show that $P^I$ is path-wise Lipschitz on $\mathcal{U}$ with respect to $\theta$ by applying the locally Lipschitz property of RDE solutions in [15, Theorem 10.26]. In particular, consider a driving signal $X$ in $E_\xi$, which is controlled by $\omega$. By the proof of Proposition 4.1, we can assume $\omega(0, T) = C\, d_p(X, \mathbf{0})^p < C \xi^p$. Let $\theta_1, \theta_2 \in \mathcal{U}$. Then for $i = 1, 2$, we have $\mathcal{M}(\|\theta_i\|) \le \mathcal{M}(\tfrac{1}{2}\|\hat\theta\|) =: M$, since by Proposition 4.1, the function $\mathcal{M}$ is non-increasing. Therefore, $\mathcal{Y}_{\theta_1}$ and $\mathcal{Y}_{\theta_2}$ are the underlying paths in the solutions to the path-independent RDEs
\[ d\mathcal{Y}_{\theta_i, t} = \mathbf{F}_{\theta_i, M}(\mathcal{Y}_{\theta_i, t})\, dX_t, \qquad \mathcal{Y}_{\theta_i, 0} = \mathbf{1}, \]
for $i = 1, 2$ respectively, where $\mathbf{F}_{\theta_i, M}(s) = \mathrm{tens}_M(s) \circ F_{\theta_i, M}(s)$ for all $s \in \widetilde{V}$. The vector field $\mathbf{F}_{\theta_i, M}$ is Lip$(\gamma)$ for any $\gamma > 0$. So, without loss of generality, assume $\gamma > \max\{p, 3\}$. Suppose
\[ v(\theta_1, \theta_2) > \|\mathbf{F}_{\theta_1, M}\|_{\mathrm{Lip}(\gamma)},\ \|\mathbf{F}_{\theta_2, M}\|_{\mathrm{Lip}(\gamma)}. \]
Then, by [15, Theorem 10.26], we have
\[ \begin{aligned} \|(\mathcal{Y}_{\theta_1})_{0,T} - (\mathcal{Y}_{\theta_2})_{0,T}\| &\le C' \|\mathbf{F}_{\theta_1, M} - \mathbf{F}_{\theta_2, M}\|_{\mathrm{Lip}(\gamma - 1)}\, \omega(0, T)^{1/p} \exp\big( C' v(\theta_1, \theta_2)^p\, \omega(0, T) \big) \\ &\le C' \|\mathbf{F}_{\theta_1, M} - \mathbf{F}_{\theta_2, M}\|_{\mathrm{Lip}(\gamma - 1)}\, C^{1/p} \xi \exp\big( C' v(\theta_1, \theta_2)^p\, C \xi^p \big), \end{aligned} \qquad (B.13) \]
where $C' > 0$ is a constant which depends only on $p$ and $\gamma$. Thus, it remains to show that $\|\mathbf{F}_{\theta, M}\|_{\mathrm{Lip}(\gamma)}$ can be uniformly bounded over $\theta \in \mathcal{U}$, and $\|\mathbf{F}_{\theta_1, M} - \mathbf{F}_{\theta_2, M}\|_{\mathrm{Lip}(\gamma - 1)}$ can be bounded by a multiple of $\|\theta_1 - \theta_2\|$ on $\mathcal{U}$. For all $s \in \widetilde{V}$, we have
\[ \mathbf{F}_{\theta_1, M}(s) - \mathbf{F}_{\theta_2, M}(s) = \mathrm{tens}_M(s) \circ \big( F_{\theta_1, M}(s) - F_{\theta_2, M}(s) \big). \]
Now by Lemma 2.12, $\|\mathbf{F}_{\theta_1, M} - \mathbf{F}_{\theta_2, M}\|_{\mathrm{Lip}(\gamma - 1)} \le K \|\mathrm{tens}_M\|_{\mathrm{Lip}(\gamma - 1)} \|F_{\theta_1, M} - F_{\theta_2, M}\|_{\mathrm{Lip}(\gamma - 1)}$ for some constant $K > 0$ depending only on $\gamma$, $m$, and $q$. Therefore, it suffices to show that $\|F_{\theta_1, M} - F_{\theta_2, M}\|_{\mathrm{Lip}(\gamma - 1)}$ can be bounded by some multiple of $\|\theta_1 - \theta_2\|$. Since the extension operator in Theorem 3.1 is linear [42, p. 176], we can consider $F_{\theta_1, M} - F_{\theta_2, M}$ as the extension of the restriction $(F_{\theta_1} - F_{\theta_2})|_{K_M}$ to $K_M := \{s \in \widetilde{V} : \|s\| \le M\}$. So, by Theorem 3.1, we only need to bound $\|(F_{\theta_1} - F_{\theta_2})|_{K_M}\|_{\mathrm{Lip}(\gamma - 1)}$. Note that $F_{\theta_1, \theta_2} := F_{\theta_1} - F_{\theta_2} \in L(\widetilde{V}, L(U, \widetilde{V}))$ is a linear function, and for all $s, s_1, s_2 \in K_M$,
\[ \|F_{\theta_1, \theta_2}(s)\| \le M \|\theta_1 - \theta_2\| \qquad \text{and} \qquad \|F_{\theta_1, \theta_2}(s_1) - F_{\theta_1, \theta_2}(s_2)\| \le \|\theta_1 - \theta_2\| \, \|s_1 - s_2\|. \]
Moreover, the second and higher derivatives of $F_{\theta_1, \theta_2}$ are identically $0$. Therefore, as desired,
\[ \|(F_{\theta_1} - F_{\theta_2})|_{K_M}\|_{\mathrm{Lip}(\gamma - 1)} \le \max\{1, M\} \cdot \|\theta_1 - \theta_2\|. \]
By similar arguments as above, we have a uniform bound for $\|\mathbf{F}_{\theta, M}\|_{\mathrm{Lip}(\gamma)}$ over $\mathcal{U}$. Therefore, by applying these bounds to (B.13), $P$ must be locally Lipschitz, and by Rademacher's theorem, $P$ must be differentiable almost everywhere. □

Proof of Proposition 7.1. Let $Y$ be the $\mathcal{M}(\|\theta_1\|)$-solution, i.e. the solution (in the sense of Theorem 3.9), to (7.2). Then for all $t \in [0, T]$ we have
\[ Y^{(m)}_t = \int_0^t \sum_{I \in \mathcal{W}([m-1]_0,\, q')} \lambda_I\, \mathbf{Y}^{I^-}_s\, dY^{(I_f)}_s = \langle \lambda, \mathbf{Y}_t \rangle = H(\mathbf{Y}_t), \]
and therefore, $Y$ will also be the $\mathcal{M}(\|\theta_1\|)$-solution, and thus, the solution, to (7.3). But given the bound on $X$ and by Proposition 4.1, the $\mathcal{M}(\|\theta_2\|)$-solution and the solution to (7.3) are the same. So, $Y$ is also the $\mathcal{M}(\|\theta_2\|)$-solution to (7.3). □

Appendix C. Figures for Section 6

This section contains the figures related to Section 6.

Figure 2. The components of all the real solutions to the polynomial system (6.3) over 100 trials in Experiment 6.2 corresponding to the sets of words (a) $W_1$ and (b) $W_2$, defined in (6.5).

Figure 3. The components of all the real solutions to the polynomial system (6.3) over 100 trials in Experiment 6.3 corresponding to the sets of words (a) $W_1$ and (b) $W_2$, defined in (6.6).

Figure 4. The components of all the real solutions to the polynomial system (6.3) over 100 trials in Experiment 6.4 corresponding to the sets of words (a) $W_5$ and (b) $W_6$, defined in (6.7). Only a few of the estimated values for $\theta_5$ fall within the interval $[-2, 2]$, and thus, the rest are not visible in these plots. In each trial, many of the estimated values for $\theta_2$, $\theta_4$, and $\theta_6$ nearly coincide.

References

[1] Jerry J. Batzel and Hien T. Tran. Stability of the human respiratory control system I: Analysis of a two-dimensional delay state-space model. Journal of Mathematical Biology, 41:45–79, 2000.
[2] Alexandros Beskos, Omiros Papaspiliopoulos, Gareth O Roberts, and Paul Fearnhead. Exact and computationally efficient likelihood-based estimation for discretely observed diffusion processes (with discussion). Journal of the Royal Statistical Society Series B: Statistical Methodology , 68(3):333–382, 2006. [3] Jaya PN Bishwal. Parameter estimation in stochastic differential equations . Springer, 2007. [4] Horatio Boedihardjo, Xi Geng, Terry Lyons, and Danyu Yang. The signature of a rough path: Uniqueness. Adv. Math. , 293:720–737, April 2016. [5] Luc Brogat-Motte, Riccardo Bonalli, and Alessandro Rudi. Learning controlled stochastic differential equations. arXiv preprint arXiv:2411.01982 , 2024. [6] Evelyn Buckwar. Euler-Maruyama and Milstein approximations for stochastic functional differential equations with dis- tributed memory term . Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2005. [7] Thomas Cass, Bruce K. Driver, Nengli Lim, and Christian Litterer. On the integration of weakly geometric rough paths. Journal of the Mathematical Society of Japan , 68(4):1505–1524, 2016. [8] Kuo-Tsai Chen. Integration of paths – a faithful representation of paths by noncommutative formal power series. Trans. Amer. Math. Soc. , 89(2):395–407, 1958. [9] Ilya Chevyrev and Terry Lyons. Characteristic functions of measures on geometric rough paths. The Annals of Probability , 44(6):4049–4082, 2016. [10] Ilya Chevyrev and Harald Oberhauser. Signature moments to characterize laws of
stochastic processes. J. Mach. Learn. Res. , 23(176):1–42, 2022. [11] Christa Cuchiero, Philipp Schmocker, and Josef Teichmann. Global universal approximation of functional input maps on weighted spaces. arXiv preprint arXiv:2306.03303 , 2023. [12] Christa Cuchiero, Sara Svaluto-Ferro, and Josef Teichmann. Signature SDEs from an affine and polynomial per- spective. arXiv preprint arXiv:2302.01362 , 2023. [13] Thomas Erneux. Applied delay differential equations . Springer, 2009. [14] Peter K. Friz and Martin Hairer. A Course on Rough Paths: With an Introduction to Regularity Structures . Universitext. Springer International Publishing, 2 edition, 2020. [15] Peter K. Friz and Nicolas B. Victoir. Multidimensional Stochastic Processes as Rough Paths: Theory and Applications . Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2010. [16] Robin Giles. A generalization of the strict topology. Trans. Amer. Math. Soc. , 161:467–474, 1971. [17] Daniel R. Grayson and Michael E. Stillman. Macaulay2, a software system for research in algebraic geometry. Available at http://www2.macaulay2.com . [18] Robert Anthony Mills Gregson. Time series in psychology . Psychology Press, 2014. [19] Ben Hambly and Terry Lyons. Uniqueness for the signature of a path of bounded variation and the reduced path group. Ann. of Math. , 171(1):109–167, 2010. [20] Tamás Kalmár-Nagy, Gábor Stépán, and Francis C Moon. Subcritical Hopf bifurcation in the delay equation model for machine tool vibrations. Nonlinear Dynamics , 26:121–142, 2001. [21] Andrew Keane, Bernd Krauskopf, and Claire M Postlethwaite. Climate models with delay differential equations. Chaos: An Interdisciplinary Journal of Nonlinear Science , 27(11), 2017. [22] Patrick Kidger. On Neural Differential Equations . PhD thesis, University of Oxford, 2021. [23] SC Kou, Benjamin P Olding, Martin Lysy, and Jun S Liu. A multiresolution method for parameter estimation of diffusion processes. 
Journal of the American Statistical Association, 107(500):1558–1574, 2012. [24] Christophe Ladroue. Expectation of Stratonovich iterated integrals of Wiener processes, 2010. [25] Anton Leykin. Numerical algebraic geometry. Journal of Software for Algebra and Geometry, 3(1):5–10, 2011. [26] Anton Leykin and Robert Krone. NumericalAlgebraicGeometry: A Macaulay2 package. Version 1.21. A Macaulay2 package available at https://github.com/Macaulay2/M2/tree/master/M2/Macaulay2/packages. [27] Terry Lyons, Michael Caruana, and Thierry Lévy. Differential Equations Driven by Rough Paths. Éc. Été Probab. St.-Flour. Springer-Verlag, Berlin Heidelberg, 2007. [28] Terry J. Lyons. Differential equations driven by rough signals. Revista Matemática Iberoamericana, 14(2):215–310, 1998. [29] Georg Manten, Cecilia Casolo, Emilio Ferrucci, Søren Wengel Mogensen, Cristopher Salvi, and Niki Kilbertus. Signature kernel conditional independence tests in causal discovery for stochastic processes. arXiv preprint arXiv:2402.18477, 2024. [30] Jan Mayer. A generalized theorem of Miranda and the theorem of Newton–Kantorovich. Numerical Functional Analysis and Optimization, 23(3-4):333–357, 2002. [31] Richard Nickl and Kolyan Ray. Nonparametric statistical inference for drift vector fields of multi-dimensional diffusions. The Annals of Statistics, 48(3):1383–1408, 2020. [32] Jan Nygaard Nielsen, Henrik Madsen, and Peter C. Young. Parameter estimation in stochastic differential equations: an overview. Annual Reviews in Control, 24:83–94, 2000. [33] Masao Ogaki. Generalized method of moments: Econometric applications. In Econometrics, volume 11 of Handbook of Statistics, pages 455–488. Elsevier, 1993. [34] Peter J. Olver.
Applications of Lie groups to differential equations , volume 107. Springer Science & Business Media, 1993. [35] Anastasia Papavasiliou and Christophe Ladroue. Parameter estimation for rough differential equations. Annals of Statistics , 39(4):2047–2073, 2011. [36] Grigorios A Pavliotis. Stochastic processes and applications. Texts in Applied Mathematics , 60, 2014. [37] Asger Roer Pedersen. A new approach to maximum likelihood estimation for stochastic differential equations based on discrete observations. Scandinavian journal of statistics , pages 55–71, 1995. [38] Jonas Peters, Stefan Bauer, and Niklas Pfister. Causal Models for Dynamical Systems , page 671–690. Association for Computing Machinery, New York, NY, USA, 1 edition, 2022. [39] Jeremy Reizenstein and Benjamin Graham. Algorithm 1004: The iisignature library: Efficient calculation of iterated-integral signatures and log signatures. ACM Transactions on Mathematical Software (TOMS) , 2020. [40] Alexandre René and André Longtin. Mean, covariance, and effective dimension of stochastic distributed delay dynamics. Chaos: An Interdisciplinary Journal of Nonlinear Science , 27(11), 2017. [41] Louis Sharrock, Nikolas Kantas, Panos Parpas, and Grigorios A Pavliotis. Parameter estimation for the McKean- Vlasov stochastic differential equation. arXiv preprint arXiv:2106.13751 , 2021. [42] Elias M. Stein. Singular Integrals and Differentiability Properties of Functions (PMS-30) . Princeton University Press, 1970. [43] Shahab Torkamani, Eric A Butcher, and Firas A Khasawneh. Parameter identification in periodic delay differential equations with distributed delay. Communications in Nonlinear Science and Numerical Simulation , 18(4):1016–1026, 2013. 
Email address: [email protected]

Department of Mathematics, University of British Columbia, Vancouver, BC, Canada V6T 1Z2

Email address: [email protected]

Department of Mathematics, University of British Columbia, Vancouver, BC, Canada V6T 1Z2

Email address: [email protected]

Department of Mathematics, University of British Columbia, Vancouver, BC, Canada V6T 1Z2

Email address: [email protected]

School of Mathematics and Maxwell Institute, University of Edinburgh, Edinburgh EH9 3FD, Scotland